WO2015049934A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2015049934A1
Authority
WO
WIPO (PCT)
Prior art keywords
trajectory
predicted
locus
prediction
predicted trajectory
Application number
PCT/JP2014/072227
Other languages
French (fr)
Japanese (ja)
Inventor
健太郎 井田
由幸 小林
宏之 水沼
山野 郁男
Original Assignee
ソニー株式会社
Application filed by ソニー株式会社
Publication of WO2015049934A1 publication Critical patent/WO2015049934A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545Pens or stylus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04105Pressure sensors for measuring the pressure or force exerted on the touch surface without providing the touch position

Definitions

  • the present disclosure relates to an information processing apparatus, an information processing method, and a program.
  • the present invention relates to an information processing apparatus, an information processing method, and a program that perform display control for displaying a real trajectory and a predicted trajectory in an easily distinguishable manner in displaying a drawing trajectory on a touch panel display.
  • A touch panel type input display device enables not only information display processing using, for example, a liquid crystal display, but also information input by touching the display surface with a finger or a pen.
  • The contact position of the finger or pen on the display surface is detected, and processing according to the detected position, for example drawing processing, is enabled.
  • There are various methods such as an electromagnetic induction method and a capacitance method for detecting the position of the pen or finger.
  • Patent Document 1 Japanese Patent Laid-Open No. 09-190275
  • Patent Document 1 discloses a configuration in which a static filter (function) is used to predict and draw a locus that has not yet been drawn.
  • Specifically, a linear approximation line is generated using the latest locus region of the locus for which drawing has been completed, and the generated linear approximation line is extended and displayed as the locus of the unprocessed (undrawn) portion.
  • However, this disclosed technique merely predicts the upcoming trajectory by an approximation process that uses only the drawing trajectory immediately before the unprocessed area. When there is, for example, a sudden change in speed or direction, the predicted point may overshoot or deviate greatly from the actual pen tip position.
  • The present disclosure has been made in view of the above-described problems, and it is an object of the present disclosure to provide an information processing apparatus, an information processing method, and a program that enable the user to perform smooth drawing processing without a sense of incongruity, for example by clearly distinguishing the actual trajectory and the predicted trajectory when displaying them.
  • The first aspect of the present disclosure is an information processing apparatus including a data processing unit that performs display control processing of a trajectory according to input position information generated by a user's operation input, wherein the data processing unit performs display control for displaying, in different manners, an actual trajectory identified based on the input position information and a predicted trajectory, which is the trajectory of an area where the identification of the actual trajectory is not completed and which is identified by a predetermined prediction process.
  • the predicted trajectory is a trajectory predicted based on an actual trajectory specified in the past.
  • the data processing unit displays at least one of a color, a transparency, and a thickness of the predicted locus in a manner different from the actual locus.
  • Furthermore, the data processing unit sets the color, transparency, or thickness of the predicted trajectory to be different from that of the actual trajectory, so that the predicted trajectory is displayed in a manner less conspicuous than the actual trajectory.
  • the data processing unit determines a display mode of the predicted trajectory according to the reliability of the predicted trajectory.
  • the data processing unit determines at least one of a color, a transparency, or a thickness of the predicted trajectory according to the reliability of the predicted trajectory. To display.
  • the data processing unit changes at least one of a color, transparency, or thickness of the predicted trajectory according to the reliability of the predicted trajectory. As the reliability decreases, the predicted trajectory is displayed in a less conspicuous manner.
  • the data processing unit controls the display unit so that the predicted trajectory is not displayed when the reliability of the predicted trajectory is determined to be lower than a predetermined threshold.
  • Alternatively, the display of the predicted trajectory is changed to an inconspicuous display mode.
  • Furthermore, the input position information is obtained by detecting the contact position of an input object with respect to the touch panel, and the data processing unit acquires a pressure value representing the pressure with which the input object touches the touch panel and determines the display mode of the predicted locus according to the pressure value.
  • Furthermore, the data processing unit controls the display so that the predicted trajectory is not displayed when it detects that the decrease in the pressure value is greater than a predetermined threshold value.
  • Furthermore, the data processing unit acquires the movement amount per unit time of the input position information, and determines the display mode of the predicted locus according to the movement amount.
  • the data processing unit stops displaying the predicted trajectory when detecting that the movement amount is less than a predetermined threshold value.
  • the input object is a stylus.
  • the data processing unit detects a plurality of similar trajectories similar to the immediately preceding trajectory of the predicted trajectory calculation area as the predicted trajectory calculation process from past trajectories, A subsequent trajectory of each similar trajectory is estimated based on the detected plurality of similar trajectories, and a predicted trajectory is calculated by averaging or weighting addition processing of the estimated plurality of subsequent trajectories.
  • The second aspect of the present disclosure is an information processing method executed in an information processing apparatus, in which a data processing unit performs trajectory display control processing according to input position information generated by a user's operation input, and in the display control processing displays, in different manners, an actual trajectory identified based on the input position information and a predicted trajectory, which is the trajectory of an area where the identification of the actual trajectory is not completed and which is identified by a predetermined prediction process.
  • The third aspect of the present disclosure is a program for executing information processing in an information processing apparatus, the program causing a data processing unit to perform display control processing of a trajectory according to input position information generated by a user's operation input and, in the display control processing, to execute display control for displaying, in different manners, an actual trajectory identified based on the input position information and a predicted trajectory, which is the trajectory of an area where the identification of the actual trajectory is not completed and which is identified by a predetermined prediction process.
  • the program of the present disclosure is a program that can be provided by, for example, a storage medium or a communication medium provided in a computer-readable format to an image processing apparatus or a computer system that can execute various program codes.
  • By providing such a program in a computer-readable format, processing corresponding to the program is realized on the information processing apparatus or the computer system.
  • Note that in this specification, a system is a logical set configuration of a plurality of devices, and is not limited to one in which the devices of each configuration are in the same casing.
  • display control of a predicted trajectory predicted according to past trajectory information is realized. Specifically, it has a data processing unit that performs a display control process of the trajectory according to the input position information, and the data processing unit displays the trajectory after the trajectory calculation process according to the input position information is completed as an actual trajectory, A trajectory in an area where the trajectory calculation process according to the input position information is not completed is estimated as a predicted trajectory, and the estimated predicted trajectory is displayed in a manner different from the displayed actual trajectory. For example, at least one of the color, transparency, and thickness of the predicted trajectory is displayed in a manner different from the actual trajectory, and the predicted trajectory is displayed in a manner that is less conspicuous than the actual trajectory. Further, a process for hiding the predicted trajectory is performed according to the reliability. With this configuration, display control of a predicted trajectory predicted according to past trajectory information is realized. Note that the effects described in the present specification are merely examples and are not limited, and may have additional effects.
  • FIG. 1 is a diagram showing an example in which a line segment (line) is drawn on a touch panel type input display device 10 using a pen type input device 11.
  • The input device 11 moves sequentially in the direction of the arrow, and a trajectory following the movement of the input device 11 is drawn; that is, a line segment corresponding to the locus of the input device is displayed.
  • the locus drawing process cannot follow the moving speed of the input device 11, and the display of the locus line segment may be delayed.
  • the drawn trajectory 21 shown in the figure is a line displayed on the display according to the trajectory of the input device.
  • the latest drawing position 22 of the drawn trajectory 21 does not coincide with the current pen tip position 15 of the input device 11, and is at the position of the trajectory of the input device 11 before a predetermined time.
  • the drawing delay area 23 shown in the figure is an area where the pen has already been moved according to a predetermined locus, but a line (line segment) corresponding to the locus is not displayed in time, and is a blank area. Such a blank area occurs due to a delay in the drawing process.
  • a linear approximation straight line is generated from the latest drawing portion of the drawn locus, and the line corresponding to the predicted locus is drawn by extending beyond the latest drawing position 22.
  • a predicted locus different from the actual pen locus may be generated, and an incorrect locus may be drawn.
  • (a) Prediction error example 1 is an example of a prediction error due to overshoot when the pen trajectory curves suddenly and the pen traveling direction fluctuates greatly. At the latest drawing position 32 of the drawn trajectory 31, the pen rapidly changes its traveling direction, as shown by the pen locus 34 in the drawing.
  • the data used for the prediction is only a drawn trajectory having a predetermined length including the latest drawing position 32 in the drawn trajectory 31. That is, only the approximate application trajectory 33 shown in the figure is data applied to the prediction.
  • a line extended according to the line direction of the approximate application locus 33 is set as the predicted locus 35 and displayed.
  • a predicted track 35 is set at a position completely different from the actual pen track 34.
  • The other prediction error example shown in FIG. 2 is (b) prediction error example 2, which is an example in which the pen serving as the input device suddenly stops, that is, the pen stops moving at the latest drawing position 42 of the drawn trajectory 41.
  • the data used for the prediction is a drawn line of a predetermined length including the latest drawing position 42 in the drawn locus 41.
  • the approximate application locus 43 shown in the figure is data applied to the prediction.
  • a line extended according to the line direction of the approximate application locus 43 is set as the predicted locus 45.
  • As a result, the predicted trajectory 45 is set ahead of the actual pen position, at which the pen has already stopped.
  • As described above, the line segment prediction process using a static filter applies only the line segment immediately before the end of the drawn trajectory to the approximation process. Therefore, if the pen movement differs from the approximate application trajectory, the prediction result becomes completely different from the actual locus of the pen, and an incorrect predicted locus is displayed.
  • FIG. 3 is a diagram illustrating an example of a trajectory prediction process of the information processing apparatus according to the present disclosure.
  • FIG. 4 is a diagram illustrating an outline of a trajectory prediction process performed by the information processing apparatus according to the present disclosure.
  • In the information processing apparatus of the present disclosure, a learning process (machine learning) using past trajectory information is performed, and the predicted trajectory 73 is set by applying the learning result to the drawing process.
  • the predicted locus 73 is drawn ahead of the latest drawing position 72.
  • the predicted trajectory is estimated by a learning process using not only the information of the drawn trajectory immediately before the latest drawing position but also the information of the past drawing trajectory. This process reduces prediction errors as described above with reference to FIG. 2 and realizes highly accurate prediction.
  • the trajectory prediction process executed by the information processing apparatus of the present disclosure is a dynamic prediction process different from the static prediction using the conventional fixed static filter (function) described above.
  • In the static prediction, the line segment immediately before the end of the drawn trajectory is applied to the approximation process, and past trajectory information does not affect the predicted trajectory. Therefore, when the immediately preceding trajectory applied to prediction is constant, the predicted trajectory is also constant.
  • In the dynamic prediction of the present disclosure, on the other hand, trajectory prediction is performed with reference to past trajectory information other than the line segment immediately before the end of the drawn trajectory, and these past trajectories affect the predicted trajectory. Therefore, even if the immediately preceding trajectory applied to prediction is constant, the predicted trajectory differs if the past trajectories differ.
  • the past trajectory includes a trajectory before the previous trajectory and includes a trajectory that is not displayed on the display unit.
  • Next, a process of estimating the predicted trajectory by a learning process to which the k-nearest-neighbors (kNN) method is applied will be described.
  • FIG. 5 is a flowchart illustrating a prediction trajectory estimation and drawing process sequence to which the kNN method is applied. Note that the processing shown in this flowchart is executed under the control of a data processing unit of the information processing apparatus according to the present disclosure, specifically, a data processing unit including, for example, a CPU having a program execution function.
  • the program is stored, for example, in the memory of the information processing apparatus.
  • the information processing apparatus performs a process of drawing a line according to the locus of an input device (input object) such as a dedicated pen (stylus).
  • As described above, because the drawing process cannot keep up, a drawing delay area, in which drawing has not caught up, occurs in the region immediately short of the current position of the input device (dedicated pen).
  • the flow shown in FIG. 5 is a sequence of processing for drawing a line (line) along the estimated locus by drawing the locus of the input device in the drawing delay area as a predicted locus. For example, it is an estimated drawing process of the predicted trajectory 73 shown in FIG.
  • Step S101 First, in step S101, a plurality (k) of trajectory regions (similar trajectories) similar to the latest drawn trajectory (immediate trajectory) are searched from the drawn trajectories.
  • the drawn trajectory is a trajectory region in which the trajectory analysis of the input device is completed and the line drawing processing corresponding to the trajectory, that is, the output display processing for the display unit is completed.
  • the latest drawing trajectory (immediately preceding trajectory) is the latest trajectory region in the drawn trajectory, and is a trajectory region in contact with the drawing delay unit.
  • the immediately preceding locus is a drawn area of a predetermined length including the latest drawing position 72 shown in FIG.
  • In this search, a plurality (k) of trajectory regions (similar trajectories) similar to the latest drawn trajectory (immediately preceding trajectory) are selected from the drawn trajectories, where k is a predetermined number such as 3, 5, 10, 20, or 30.
  • Step S102 Next, in step S102, subsequent trajectories of a plurality (k) of trajectory regions (similar trajectories) searched in step S101 are estimated or selected, and these multiple subsequent trajectories are connected to the latest drawing position.
  • the latest drawing position corresponds to the latest drawing position 72 shown in the example shown in FIG.
  • Step S103 Next, in step S103, an average trajectory of a plurality of connected subsequent trajectories is calculated.
  • Step S104 Finally, in step S104, a drawing process is executed with the average trajectory calculated in step S103 as the final determined predicted trajectory.
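  • As a purely illustrative sketch (not part of the disclosure), the four steps S101 to S104 described above can be outlined in Python as follows; the function name, the fixed-length window, and the simple Euclidean segment matching are assumptions made only for illustration.

```python
import numpy as np

def predict_trajectory(drawn, window=5, k=3, horizon=3):
    """Estimate a predicted trajectory from the drawn trajectory (steps S101-S104, illustrative).

    drawn   : (N, 2) array of already-drawn trajectory points, newest last
    window  : number of points forming the immediately preceding trajectory
    k       : number of similar past trajectory regions to search for (S101)
    horizon : number of future points to predict
    """
    drawn = np.asarray(drawn, dtype=float)
    recent = drawn[-window:] - drawn[-1]          # immediately preceding trajectory, shifted to its tip

    # S101: search the drawn trajectory for k regions similar to the immediately preceding trajectory,
    # excluding regions that overlap the immediately preceding trajectory itself
    candidates = []
    for start in range(max(0, len(drawn) - 2 * window - horizon + 1)):
        segment = drawn[start:start + window] - drawn[start + window - 1]
        candidates.append((np.linalg.norm(segment - recent), start))   # simple shape distance (assumption)
    if len(candidates) < k:
        return None, None                          # not enough past data to predict
    candidates.sort(key=lambda c: c[0])
    similar_starts = [start for _, start in candidates[:k]]

    # S102: take the subsequent trajectory of each similar region and connect it to the latest drawing position
    auxiliary = []
    for start in similar_starts:
        follow = drawn[start + window:start + window + horizon]
        auxiliary.append(follow - drawn[start + window - 1] + drawn[-1])

    # S103: average the connected subsequent trajectories
    auxiliary = np.stack(auxiliary)
    predicted = auxiliary.mean(axis=0)

    # S104: the caller draws `predicted` as the final determined predicted trajectory
    return predicted, auxiliary
```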
  • FIG. 6 shows drawn lines similar to those shown in FIG. This is an example in which an input device such as a dedicated pen draws a locus from left to right, and a line along the locus is drawn.
  • The dedicated pen serving as the input device has already advanced beyond the latest drawing position 81 of the drawn locus, and a predicted locus is determined and drawn ahead of the latest drawing position 81 according to the estimated locus.
  • the finally set prediction trajectory is the final determined prediction trajectory (Qf) 86 shown in FIG.
  • the final determined predicted trajectory (Qf) 86 is determined by a process such as averaging the k auxiliary predicted trajectories Q1 to Q3 set according to the extracted k similar trajectories.
  • Step S101 First, in step S101, a plurality (k) of trajectory areas (similar trajectories) similar to the latest drawing trajectory (immediate trajectory) are searched from the drawn trajectories.
  • First, the latest drawn locus (immediately preceding locus P) 82 is extracted from the drawn locus 80 shown in FIG. Then, a plurality (k) of trajectory regions similar to the extracted immediately preceding locus (P) 82 are searched for from the drawn locus. Here, k = 3.
  • step S101 is a process of searching k similar trajectories similar to the latest drawing trajectory (previous trajectory P) 82 from the drawn trajectories.
  • Step S102 Next, in step S102, the subsequent trajectories of the plural (k) trajectory regions (similar trajectories) searched in step S101 are selected, and the selected plural subsequent trajectories are connected to the end of the latest drawing position.
  • In step S101, three similar trajectories shown in FIG. were found: the similar trajectory (R1) 83-1 of the immediately preceding trajectory (P), the similar trajectory (R2) 83-2 of the immediately preceding trajectory (P), and the similar trajectory (R3) 83-3 of the immediately preceding trajectory (P).
  • In step S102, the subsequent trajectory of each of these similar trajectories is selected.
  • Specifically, the following three subsequent trajectories are selected: the subsequent locus (A1) 84-1 of the similar locus (R1), the subsequent locus (A2) 84-2 of the similar locus (R2), and the subsequent locus (A3) 84-3 of the similar locus (R3).
  • These three subsequent loci 84-1 to 84-3 are then connected to the tip of the latest drawing position 81.
  • Note that the angle between the immediately preceding locus P and each connected subsequent locus is set to match the connection angle between the corresponding similar locus and its subsequent locus.
  • the three subsequent trajectories A1 to A3 are connected to the latest drawing position 81.
  • The trajectories thus connected are the three auxiliary predicted trajectories (Q1 to Q3) 85-1 to 85-3.
  • the process in step S102 is a process for connecting the subsequent locus of the similar locus detected in step S101 to the end of the latest drawing position.
  • Step S103 Next, in step S103, an average trajectory of a plurality of connected subsequent trajectories is calculated. This process will be described with reference to FIG.
  • The subsequent trajectories connected to the tip of the latest drawing position 81 form the three auxiliary predicted trajectories (Q1 to Q3) 85-1 to 85-3.
  • In step S103, the average trajectory of these three auxiliary predicted trajectories (Q1 to Q3) 85-1 to 85-3 is calculated.
  • the average trajectory obtained is the trajectory shown in the final determined predicted trajectory (Qf) 86 shown in FIG.
  • Step S104 a process of drawing the average locus calculated in step S103 as the final decision prediction line (final decision prediction locus) is executed. This process will be described with reference to FIG.
  • step S104 a process of drawing the average trajectory calculated in step S103, that is, the final determined predicted trajectory (Qf) 86 shown in FIG. 6 as a predicted trajectory is executed.
  • The information processing apparatus stores, in a memory, the coordinate information (x_t, y_t) of the drawing trajectory newly displayed for each image frame (t) displayed on the display unit.
  • That is, the information processing apparatus stores, in the memory, coordinate information corresponding to the drawing trajectory of a fixed past period counted back from the latest drawing position, and using this coordinate information performs the trajectory similarity determination process, the subsequent trajectory connection process, and the final predicted trajectory determination process in which the average value of the connected subsequent trajectories is calculated.
  • the coordinate information (x, y) is stored in the memory corresponding to each display frame (t).
  • The coordinates indicating the latest drawing trajectory position corresponding to frame t are stored in the memory as (x_t, y_t), and the coordinates indicating the latest drawing trajectory position corresponding to the next frame t+1 are stored in the memory as (x_{t+1}, y_{t+1}).
  • coordinate information as trajectory position information associated with each frame is stored in the memory, and trajectory similarity determination processing and the like are executed using this information.
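  • Purely as an illustration of this per-frame storage (the class name, the buffer length, and the use of a deque are assumptions), the latest coordinates could be kept in a bounded buffer like this:

```python
from collections import deque

class TrajectoryBuffer:
    """Stores the latest drawing coordinate (x, y) for each display frame t (illustrative sketch)."""

    def __init__(self, max_frames=100):
        # keep only the most recent `max_frames` coordinates; appending a new one
        # automatically discards the coordinate of the oldest frame
        self.points = deque(maxlen=max_frames)

    def add_frame(self, x, y):
        """Record the latest drawing position (x_t, y_t) of the current frame."""
        self.points.append((float(x), float(y)))

    def recent(self, n):
        """Return the coordinates of the last n frames, oldest first."""
        return list(self.points)[-n:]

# usage: one call per displayed frame
buf = TrajectoryBuffer(max_frames=100)
buf.add_frame(12.0, 34.5)
buf.add_frame(13.2, 35.1)
print(buf.recent(2))
```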
  • a specific processing example using the locus coordinate information will be described.
  • In step S101, a process of searching the drawn locus for trajectories similar to the latest drawn locus (immediately preceding locus) is performed.
  • the feature amount of the locus is calculated using the coordinate information of the locus, and the similarity is determined by comparing the feature amounts.
  • the following feature amounts are calculated as the feature amounts applied to the similarity determination.
  • Speed: Z1, Acceleration: Z2
  • Angle: Z3, Angle difference: Z4
  • Each of these feature amounts is calculated using coordinate information corresponding to the drawing trajectory.
  • That is, the feature amounts calculated from the coordinate information constituting the latest drawn locus are compared with the feature amounts calculated from the coordinate information constituting the drawn locus, and drawn locus regions having more similar feature amounts are selected.
  • In this way, for example, the similar trajectories (R1 to R3) 83-1 to 83-3 shown in FIG. 6 are extracted.
  • Z1 (t) is the velocity at frame t
  • Z2 (t) is the acceleration at frame t
  • Z3 (t) is the angle at frame t
  • Z4 (t) is the angular difference at frame t
  • t is a parameter indicating a frame number in the present embodiment, but processing in which t is set as time information instead of the frame number is also possible. That is, in the following description, frame t and frame u can be replaced with time t and time u.
  • FIG. 7 shows a part of the drawn trajectory.
  • FIG. 7 shows the coordinates (x_{t-2}, y_{t-2}) to (x_{t+1}, y_{t+1}) of the latest positions of the drawing trajectory at the time each of the frames from frame t-2 to frame t+1 is displayed on the display unit, that is, the following four points P1 to P4.
  • x corresponds to the horizontal direction in the figure
  • y corresponds to the vertical direction.
  • P1: latest position coordinates (x_{t-2}, y_{t-2}) of the drawing trajectory displayed in frame t-2
  • P2: latest position coordinates (x_{t-1}, y_{t-1}) of the drawing trajectory displayed in frame t-1
  • P3: latest position coordinates (x_t, y_t) of the drawing trajectory displayed in frame t
  • P4: latest position coordinates (x_{t+1}, y_{t+1}) of the drawing trajectory displayed in frame t+1
  • The above-described feature amounts Z1(t) to Z4(t) are calculated as feature amounts corresponding to the coordinates (x_t, y_t) of frame t. Similar feature amounts are calculated at the coordinate positions corresponding to frames other than frame t.
  • The velocity Z1(t), which is one of the feature amounts corresponding to the latest coordinates (x_t, y_t) of frame t, is calculated by the following equation.
  • Speed: Z1(t) = sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}. This corresponds to the distance La between P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}) shown in FIG., that is, the distance traveled by the trajectory from frame t-1 to frame t, and corresponds to the moving speed of the trajectory over one frame.
  • The acceleration Z2(t), which is one of the feature amounts corresponding to the latest coordinates (x_t, y_t) of frame t, is calculated as Acceleration: Z2(t) = Z1(t) / Z1(t-1). This corresponds to the ratio La / Lb between the distance La from P3 (x_t, y_t) to P2 (x_{t-1}, y_{t-1}) and the distance Lb from P2 (x_{t-1}, y_{t-1}) to P1 (x_{t-2}, y_{t-2}). That is, it expresses how many times the trajectory distance La from frame t-1 to frame t is the trajectory distance Lb from frame t-2 to frame t-1, and corresponds to the ratio of the trajectory moving speed between the current frames to that between the preceding frames.
  • Note that the feature amount Z2(t) indicating acceleration is calculated in the above example as a speed ratio indicating how many times the subsequent speed is the preceding speed, but it may instead be calculated as the difference between the subsequent speed and the preceding speed. In this case, the feature amount Z2(t) is calculated as Z2(t) = Z1(t) - Z1(t-1).
  • The angle Z3(t), which is one of the feature amounts corresponding to the latest coordinates (x_t, y_t) of frame t, is calculated by the following equation.
  • Angle: Z3(t) = atan{(x_t - x_{t-1}) / (y_t - y_{t-1})}
  • This corresponds to an angle of the triangle composed of the line segment La having vertices P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}) shown in FIG. 7, the horizontal line Wa, and the vertical line Ha, and represents the traveling direction of the trajectory from P2 to P3.
  • The angle difference Z4(t), which is one of the feature amounts corresponding to the latest coordinates (x_t, y_t) of frame t, is calculated by the following equation.
  • Angle difference: Z4(t) = atan{(x_t - x_{t-1}) / (y_t - y_{t-1})} - atan{(x_{t-1} - x_{t-2}) / (y_{t-1} - y_{t-2})}
  • This corresponds to the difference (θ - φ) between the angle of the triangle formed by the line segment La having vertices P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}) shown in FIG. 7 and the corresponding angle of the triangle formed by the line segment Lb having vertices P2 (x_{t-1}, y_{t-1}) and P1 (x_{t-2}, y_{t-2}); that is, it represents the change in the traveling direction of the trajectory between successive frames.
  • These four feature quantities Z1 to Z4 are sequentially obtained for the latest coordinate point of each frame and stored in the memory.
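  • The following Python sketch, given only as an illustration of the four feature amounts defined above, computes Z1 to Z4 from three consecutive per-frame coordinates; using atan2 instead of the plain atan of the equations above is an implementation assumption made to avoid division by zero.

```python
import math

def features(p1, p2, p3):
    """Compute the feature amounts Z1-Z4 for the newest point p3 (illustrative sketch).

    p1, p2, p3 : (x, y) coordinates at frames t-2, t-1 and t.
    Returns (Z1, Z2, Z3, Z4) = (speed, acceleration, angle, angle difference).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3

    la = math.hypot(x3 - x2, y3 - y2)      # distance travelled from frame t-1 to t
    lb = math.hypot(x2 - x1, y2 - y1)      # distance travelled from frame t-2 to t-1

    z1 = la                                # Speed: Z1(t)
    z2 = la / lb if lb > 0 else 0.0        # Acceleration: Z2(t) = La / Lb (speed ratio)

    z3 = math.atan2(x3 - x2, y3 - y2)      # Angle: Z3(t), direction of travel (dx over dy convention)
    z3_prev = math.atan2(x2 - x1, y2 - y1)
    z4 = z3 - z3_prev                      # Angle difference: Z4(t)
    return z1, z2, z3, z4

# usage with three consecutive latest drawing positions
print(features((0.0, 0.0), (1.0, 1.0), (2.5, 1.5)))
```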
  • For example, the memory holds coordinate information corresponding to a predetermined number of frames, such as the past 100 frames counted back from the latest drawing position 81 shown in FIG. 6. Each time new coordinate information is input, the memory data may be updated by deleting the coordinate information of the oldest frame.
  • Alternatively, handwriting trajectory data including data accumulated in the past may be stored, and the predicted trajectory may be estimated by performing the similarity determination using this accumulated past data.
  • the memory storage data may be associated with the user identifier (user ID), and processing using the user-corresponding storage data may be performed according to the user.
  • the search for the similar trajectory is performed by calculating the feature amount using the coordinate information corresponding to the trajectory for a plurality of frames stored in the memory, and comparing the feature amounts.
  • the comparison target is the feature amount of the immediately preceding locus and the feature amount obtained from the other past locus.
  • Here, t′ corresponds to the frame number in which the coordinates (x_{t′}, y_{t′}) of the latest drawing position 81 shown in FIG. 6 are displayed as the newest locus, that is, the frame in which the latest locus immediately before the predicted locus has been drawn.
  • t corresponds to a past arbitrary frame displaying coordinates (x t , y t ) of an arbitrary position on the drawn trajectory.
  • i corresponds to a frame section to be subjected to similarity comparison, and corresponds to, for example, the number of frames necessary for drawing the locus of the immediately preceding locus 82 shown in FIG.
  • If the immediately preceding trajectory 82 shown in FIG. 6 is a trajectory generated by the display processing of five frames, a comparison process using the coordinate position information of five frames is executed.
  • The similarity determination target to be compared with the immediately preceding trajectory is selected from the drawn trajectory 80 preceding the immediately preceding trajectory 82, and likewise consists of trajectory information for five frames.
  • wj is a coefficient corresponding to the type of feature amount, that is, the respective coefficients w1 to w4 for Z1 to Z4.
  • the above weight setting may be used.
  • the feature amount distance D (t, t ′) is calculated, and a predetermined selection number, that is, k pieces, is selected in ascending order of the feature amount distance.
  • the weight wj set for each feature amount may be configured to increase the weight for newer data, for example.
  • it may be configured to calculate feature amount distance data in which a weight according to the distance is set so as to preferentially select a trajectory that is close to the previous trajectory.
  • The formula for calculating the feature amount distance D(t, t′) when the similarity is calculated with a larger weight for newer data weights the difference of each feature amount Zj at each offset i within the comparison section by wj · pow(γ, i), where:
  • γ: weight attenuation rate set in advance (0.0 < γ < 1.0)
  • pow(γ, i): weight (a function that outputs 1.0 at the newest locus position and outputs a smaller value as the position gets older)
  • In this way, the feature amount distance may be calculated by giving a larger weight to newer locus positions.
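  • As a minimal sketch only, assuming squared differences of the feature amounts (the exact way the differences are combined is not spelled out above), the feature amount distance with the decaying weight pow(γ, i) and the selection of the k nearest past trajectories could look like this:

```python
import numpy as np

def feature_distance(Z, t, t_prime, section, w=(1.0, 1.0, 1.0, 1.0), gamma=0.9):
    """Weighted feature amount distance D(t, t') between two trajectory positions (illustrative sketch).

    Z       : array of shape (num_frames, 4) holding (Z1, Z2, Z3, Z4) for each frame
    section : number of offsets i = 0 .. section-1 included in the comparison
    w       : per-feature weights w_j for Z1..Z4 (assumed values)
    gamma   : weight attenuation rate, 0.0 < gamma < 1.0; pow(gamma, i) is 1.0 at the
              newest locus position and shrinks for older positions
    """
    w = np.asarray(w, dtype=float)
    d = 0.0
    for i in range(section):
        diff = Z[t - i] - Z[t_prime - i]
        d += (gamma ** i) * np.sum(w * diff ** 2)   # squared differences are an assumption
    return d

def k_nearest(Z, t_prime, section, k=3, gamma=0.9):
    """Return the k past frame indices whose immediately preceding trajectories are most similar."""
    candidates = range(section, t_prime - section)   # leave room for a subsequent trajectory
    scored = sorted(candidates, key=lambda t: feature_distance(Z, t, t_prime, section, gamma=gamma))
    return scored[:k]
```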
  • Next, the subsequent trajectories of the plurality of similar trajectories selected in step S101 are connected to the tip of the latest drawing position, and an averaging process is performed to determine the final predicted trajectory. This predicted locus determination process will be described.
  • Specifically, the coordinates (x_u, y_u) constituting the predicted trajectory are calculated according to the following formula (predicted trajectory coordinate calculation formula):
  • x_u = (1/k) Σ_{n=1..k} x_u(n)′, y_u = (1/k) Σ_{n=1..k} y_u(n)′
  • where x_u(n)′ and y_u(n)′ are the coordinates in frame u of the auxiliary predicted trajectory derived from the n-th similar trajectory.
  • u in the predicted trajectory coordinate calculation formula corresponds to a frame number.
  • When the frame number of the display frame at the latest drawing position 81 shown in FIG. 6 is t, u is a frame number after the frame number t, and frame u corresponds to a future frame for which the drawing of the locus has not been completed.
  • k is the number of extracted similar trajectories.
  • n is a variable of 1 to k.
  • V_u(n) = V_{u-1}(n) · Z2(s_n + u - t)
  • a_u(n) = a_{u-1}(n) + Z4(s_n + u - t)
  • Here, s_n denotes the frame of the n-th similar trajectory corresponding to the latest drawing position (frame t), so that the velocity V_u(n) and the angle a_u(n) at the future frame u are obtained by applying the acceleration feature Z2 and the angle difference feature Z4 of the corresponding position of the similar trajectory to the values of the preceding frame.
  • the xy coordinates in the future frame u corresponding to the k similar trajectories are calculated. These coordinates are xy coordinates constituting the k auxiliary prediction trajectories shown in FIG.
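  • Purely as a sketch of how such a recursion could generate the points of one auxiliary predicted trajectory (the update rule below is a reconstruction of the garbled formulas above, so treat both the rule and the angle convention as assumptions): starting from the velocity and angle at the latest drawing position, each future point advances by the current velocity in the current direction, with the velocity scaled by the similar trajectory's Z2 and the angle shifted by its Z4.

```python
import math

def auxiliary_points(x0, y0, v0, a0, z2_seq, z4_seq):
    """Generate future points of one auxiliary predicted trajectory (reconstructed, illustrative).

    x0, y0 : latest drawing position (frame t)
    v0, a0 : velocity and angle of the immediately preceding trajectory at frame t
    z2_seq : acceleration features Z2 of the similar trajectory at successive offsets
    z4_seq : angle-difference features Z4 of the similar trajectory at successive offsets
    """
    x, y, v, a = x0, y0, v0, a0
    points = []
    for z2, z4 in zip(z2_seq, z4_seq):
        v = v * z2                     # V_u(n) = V_{u-1}(n) * Z2(...)   (assumed reading)
        a = a + z4                     # a_u(n) = a_{u-1}(n) + Z4(...)   (assumed reading)
        # angle convention follows Z3 = atan(dx / dy): dx = v*sin(a), dy = v*cos(a)
        x = x + v * math.sin(a)
        y = y + v * math.cos(a)
        points.append((x, y))
    return points

# usage: constant speed, slight leftward curve
print(auxiliary_points(0.0, 0.0, 2.0, 0.0, [1.0, 1.0, 1.0], [0.1, 0.1, 0.1]))
```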
  • The predicted trajectory coordinate calculation formula described above is then used to calculate the constituent coordinates of the final predicted trajectory; that is, as described above, the coordinates (x_u, y_u) constituting the predicted trajectory are calculated as the averages x_u = (1/k) Σ_{n=1..k} x_u(n)′ and y_u = (1/k) Σ_{n=1..k} y_u(n)′.
  • the final predicted trajectory is calculated by simply averaging the constituent coordinates of the predicted trajectory calculated based on all similar trajectories.
  • Alternatively, instead of simple averaging, a configuration may be adopted in which the final predicted trajectory coordinates are calculated by weighted addition averaging in which a larger weight is given to auxiliary trajectories derived from past trajectories with higher similarity; that is, the coordinates of the predicted trajectory may be calculated by performing weighting according to the similarity in the predicted trajectory coordinate calculation formula.
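  • A short illustrative sketch of this averaging step follows; the similarity weighting shown (inverse of the feature amount distance) is an assumption, since the exact weighting is left open above.

```python
import numpy as np

def final_predicted_trajectory(auxiliary, distances=None):
    """Average k auxiliary predicted trajectories into the final predicted trajectory (illustrative).

    auxiliary : array of shape (k, horizon, 2) with the coordinates of each auxiliary trajectory
    distances : optional feature amount distances of the k similar trajectories; if given,
                more similar trajectories (smaller distance) receive a larger weight
    """
    auxiliary = np.asarray(auxiliary, dtype=float)
    if distances is None:
        return auxiliary.mean(axis=0)                         # simple average: (1/k) * sum
    w = 1.0 / (np.asarray(distances, dtype=float) + 1e-9)     # similarity weight (assumed form)
    w = w / w.sum()
    return np.tensordot(w, auxiliary, axes=(0, 0))            # weighted addition averaging

# usage: three auxiliary trajectories of two future points each
aux = [[[1, 1], [2, 2]], [[1.2, 0.9], [2.3, 1.8]], [[0.9, 1.1], [1.9, 2.1]]]
print(final_predicted_trajectory(aux))
print(final_predicted_trajectory(aux, distances=[0.1, 0.5, 0.2]))
```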
  • In general, with prediction based on a learning process, the prediction accuracy is high for frequently appearing trajectories but low for trajectories that appear infrequently; in extreme terms, accuracy is reduced for anything other than straight lines and relatively smooth curves. This is particularly noticeable when learning has not progressed sufficiently. If the prediction accuracy decreases, the probability that the predicted locus is drawn at a position different from that of the input device such as the dedicated pen increases, and such drawing of an incorrect predicted trajectory can be annoying for the user. Therefore, when it is determined that the prediction accuracy is low, it is preferable not to reflect the prediction result in the drawing.
  • this processing example will be described.
  • FIG. 9 shows a drawn trajectory 90 and a previous trajectory 91 that is the tip portion thereof.
  • a predicted locus is set ahead of the immediately preceding locus 91.
  • k auxiliary predicted trajectories 92 are calculated at an arbitrary time point. Since the k predicted trajectories represent k similar trajectories in the past, in the case of a simple trajectory such as a straight line the positions of the k trajectories converge, and conversely, when there is a sudden change or the like, the positions of the k trajectories are scattered.
  • That the auxiliary predicted trajectories 92 are scattered means that no past trajectory sufficiently close to the current trajectory could be found. Therefore, the standard deviation σ at each coordinate position of the k auxiliary predicted trajectories is calculated, and processing corresponding to the calculated standard deviation σ is performed. Specifically, only when the calculated standard deviation is small is the average of the k auxiliary predicted trajectories calculated and the coordinate position of the final determined predicted trajectory determined and drawn. Thereby, prediction errors perceived by the user can be reduced.
  • The circles shown in the figure conceptually indicate the size of the standard deviation: the larger the circle, the larger the standard deviation, that is, the greater the variation in the auxiliary predicted trajectories, and the more uncertain the final determined predicted trajectory to be calculated as their average value.
  • The final position of the immediately preceding trajectory 91 corresponds to the coordinate position of the trajectory displayed in frame t, and the standard deviation is shown for the coordinate positions of the subsequent future frames.
  • FIG. 10 is a diagram illustrating a flowchart for explaining a prediction trajectory estimation and drawing process sequence to which the kNN method is applied, similarly to the flow of FIG. 5 described above. Note that the processing shown in this flowchart is executed under the control of a data processing unit of the information processing apparatus according to the present disclosure, specifically, a data processing unit including, for example, a CPU having a program execution function. The program is stored, for example, in the memory of the information processing apparatus.
  • Step S201 Steps S201 to S202 are the same processing as the processing of steps S101 to S102 in the flow shown in FIG.
  • step S201 a plurality (k) of trajectory regions (similar trajectories) similar to the latest drawing trajectory (immediate previous trajectory) are searched from the drawn trajectories.
  • the drawn trajectory is a trajectory region in which the trajectory analysis of the input device is completed and the line drawing processing corresponding to the trajectory, that is, the output display processing for the display unit is completed.
  • the latest drawing trajectory (immediately preceding trajectory) is the latest trajectory region in the drawn trajectory, and is a trajectory region in contact with the drawing delay unit.
  • the immediately preceding locus is, for example, a drawn area of a predetermined length including the latest drawing position 72 shown in FIG.
  • In this search, a plurality (k) of trajectory regions (similar trajectories) similar to the latest drawn trajectory (immediately preceding trajectory) are selected from the drawn trajectories, where k is a predetermined number such as 3, 5, 10, 20, or 30.
  • Step S202 Next, in step S202, subsequent trajectories of a plurality (k) of trajectory regions (similar trajectories) searched in step S201 are estimated or selected, and the plurality of subsequent trajectories are connected to the tip of the latest drawing position.
  • the latest drawing position corresponds to the latest drawing position 72 shown in the example shown in FIG.
  • Step S203 In step S203 and the subsequent steps, the standard deviation of the predicted trajectory is taken into account. For the plurality of predicted trajectories corresponding to the plurality of similar trajectories, that is, the k auxiliary predicted trajectories shown in FIG. 9, the standard deviation is calculated for each future frame, and processing is performed for each frame according to the calculation result.
  • Frame t is the frame number of the frame that displays the latest drawing position at the tip of the immediately preceding locus.
  • Step S204 In step S204, the standard deviation σu of the coordinate positions of the k auxiliary predicted trajectories in frame u is calculated. Note that the coordinate positions corresponding to the k auxiliary predicted trajectories are calculated by the same process as described above with reference to the flow of FIG.; in step S204, the standard deviation σu of the k coordinate positions corresponding to frame u is further calculated.
  • Step S205 In step S205, it is determined, based on the standard deviation σu corresponding to frame u, whether drawing of the predicted trajectory is appropriate, that is, whether a reliable predicted trajectory can be drawn. Specifically, it is determined whether the determination formula σu < α · Vt is satisfied.
  • Here, α · Vt corresponds to the drawing determination threshold value.
  • α: a coefficient set in advance
  • Vt: the drawing speed of the immediately preceding trajectory
  • Vt corresponds to the length s of the immediately preceding locus shown in the figure; s corresponds to the distance advanced by the trajectory between frame t-1 and frame t.
  • In other words, Vt is the length of the trajectory advanced in the immediately preceding frame, and corresponds to the speed of the immediately preceding trajectory.
  • The coefficient α can be changed according to the situation: for example, if the setting is to allow drawing only when the reliability is high, the value is reduced, and if drawing is allowed even at low reliability, the value is increased. It may also be a value that can be set by the user.
  • If the above determination formula is satisfied, that is, if the standard deviation σu is smaller than the threshold value, it is determined that a relatively reliable predicted trajectory can be determined, and the process proceeds to step S206. On the other hand, if the determination formula is not satisfied, that is, if the standard deviation σu is equal to or greater than the threshold, it is determined that it is difficult to draw a highly reliable predicted trajectory, and the process proceeds to step S211.
  • Step S206 In step S206, the average coordinate of the k coordinates corresponding to frame u of the plurality (k) of auxiliary predicted trajectories is calculated.
  • Step S207 In step S207, the average coordinate calculated in step S206 is determined as the coordinate of the final determined predicted trajectory corresponding to frame u, and the predicted trajectory drawing process is executed.
  • Step S208 In step S208, it is determined whether or not there is an unprocessed frame for which the drawing process of frame u is to be executed. If there is, the process proceeds to step S209; if not, the process ends.
  • Step S209 In step S209, the frame number to be processed is updated, and the processing from step S204 onward is performed for the next frame. When all the unprocessed frames have been processed, the process ends.
  • Step S211 is a process executed when No is determined in the determination process of step S205.
  • the process is executed when it is determined in step S205 that the standard deviation ⁇ u is equal to or greater than the threshold value and it is difficult to draw a highly reliable predicted trajectory.
  • In step S211, it is determined to end the prediction process, or a prediction-method switching process is performed to change over to the conventional static prediction process.
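  • Continuing the illustrative sketches above, the per-frame reliability gate of steps S204 to S207 and S211 might look like the following; the comparison σu < α · Vt follows the determination formula described above, while the helper names are assumptions.

```python
import numpy as np

def draw_with_reliability_gate(auxiliary, v_t, alpha=1.0):
    """Draw predicted points frame by frame while the auxiliary trajectories agree (illustrative).

    auxiliary : array of shape (k, horizon, 2) with the coordinates of the k auxiliary
                predicted trajectories for future frames u = t+1, t+2, ...
    v_t       : drawing speed of the immediately preceding trajectory (length s per frame)
    alpha     : preset coefficient; a smaller alpha allows drawing only at high reliability
    """
    auxiliary = np.asarray(auxiliary, dtype=float)
    drawn_points = []
    for u in range(auxiliary.shape[1]):                 # loop over future frames (S208/S209)
        coords = auxiliary[:, u, :]                     # k candidate coordinates for frame u
        sigma_u = np.sqrt(np.mean(np.sum((coords - coords.mean(axis=0)) ** 2, axis=1)))  # S204
        if sigma_u < alpha * v_t:                       # S205: reliability determination
            drawn_points.append(coords.mean(axis=0))    # S206/S207: draw the average coordinate
        else:
            break                                       # S211: stop predicting from this frame on
    return drawn_points
```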
  • The pressure value of the input device against the display unit is detected by, for example, the input device itself or a pressure detection sensor provided on the surface of the display unit, and the detected value is input to the control unit and used for the reliability calculation.
  • the pressure value gradually decreases from several frames before the pen leaves (releases) the display unit surface.
  • This tendency of the pressure value to gradually decrease becomes particularly noticeable when the sweeping stroke ("harai") at the end of a character is drawn.
  • Defining the pressure value in frame t (or at time t) as p_t, the feature amounts Z5 and Z6 are defined as follows.
  • Pressure value of frame t: Z5(t) = p_t
  • Pressure value change amount of frame t: Z6(t) = p_t - p_{t-1}
  • The predicted pen pressure (pressure value) p_u at the future frame u (or future time u) on the predicted trajectory can be calculated by an equation analogous to the predicted trajectory coordinate calculation formula, where
  • u is the frame number (or time) and k is the number of extracted similar trajectories, and
  • n is a variable from 1 to k.
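  • A small illustrative sketch of the pressure-related feature amounts, together with one plausible way (an assumption mirroring the coordinate averaging above) to obtain the predicted pen pressure p_u:

```python
def pressure_features(p_t, p_prev):
    """Z5(t) = p_t and Z6(t) = p_t - p_{t-1}: the pressure value and its change amount."""
    return p_t, p_t - p_prev

def predicted_pressure(similar_pressures):
    """Predicted pen pressure p_u for a future frame u, averaged over the pressure values
    of the k similar trajectories at the corresponding offset (assumed form)."""
    return sum(similar_pressures) / len(similar_pressures)

# usage
print(pressure_features(0.8, 0.9))            # pressure dropping shortly before a release
print(predicted_pressure([0.7, 0.65, 0.6]))   # k = 3 similar trajectories
```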
  • In this processing example, the learning pattern is changed according to the type of drawing object input by the user. Specifically, the learning pattern is changed depending on whether the user's drawing object is a picture or text, and, in the case of text, further depending on its type, for example alphabet or Japanese (kanji or hiragana).
  • That is, the prediction process exploits the fact that the shape features differ for each drawing object such as a picture or text. For example, compared with the alphabet, kanji contain many straight lines, sharp changes of direction, and short strokes, and their characteristics differ greatly from alphabetic characters, which as a whole use many long curves. By changing the feature amounts used for the extraction of similar trajectories and the determination of the predicted trajectory according to the type of drawing object in this way, processing with higher accuracy becomes possible.
  • For example, a right-handed user holds the pen in the right hand; when drawing a line from left to right, the space to the left of the right hand can be observed well, but when drawing a line from right to left, the space on the right side is hidden by the hand.
  • (A) Combined use with word prediction processing For example, when the user's drawing object is a document, word prediction can be performed, and the prediction accuracy can be improved using the word prediction result. For example, the next character to be written is estimated by word prediction, the locus corresponding to the estimated character is estimated, and the weight of the auxiliary prediction locus similar to the estimation result is increased.
  • (B) Processing for switching between dynamic prediction and static prediction The process of searching past trajectories for similar trajectories and applying a learning process to determine the predicted trajectory is a dynamic prediction process. Depending on the drawing object, however, the conventional static prediction process, that is, trajectory estimation by linear approximation using only the data of the immediately preceding trajectory, may be effective, and the two may be switched.
  • a character database (font database) is stored in a memory.
  • Many of such devices have a function of performing a process of searching a character database for characters similar to a user's drawing trajectory. Using this function, a character similar to the character written by the user is selected from the database, a trajectory corresponding to the selected character is estimated, and the weight of the auxiliary predicted trajectory similar to the estimation result is increased. .
  • (D) Example of prediction processing independent of character scale For example, when a user draws characters, their size (scale) differs from occasion to occasion. In the above-described embodiment, when the scale differs, it may be difficult to determine similarity correctly in the similar trajectory determination. To solve this problem, for example, the character size of one character drawn by the user is determined, the determined character size is converted into an absolute scale having a preset standard size, and the comparison process is performed using the locus converted to the absolute scale. Note that the scale conversion processing is executed by combining, for example, enlargement processing, reduction processing, interpolation processing, and thinning processing. By performing such processing, erroneous determinations due to scale differences can be reduced.
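  • As an illustration of this scale normalization (the bounding-box measure and the target size are assumptions), a drawn character could be rescaled to a standard size before the similarity comparison:

```python
import numpy as np

def normalize_scale(stroke, standard_size=100.0):
    """Rescale a stroke to a preset standard (absolute) size before comparison (illustrative).

    stroke : (N, 2) array of (x, y) points forming one drawn character
    """
    stroke = np.asarray(stroke, dtype=float)
    mins, maxs = stroke.min(axis=0), stroke.max(axis=0)
    extent = float(np.max(maxs - mins))                  # current character size (bounding box)
    if extent == 0.0:
        return stroke - mins                             # degenerate stroke: nothing to scale
    return (stroke - mins) * (standard_size / extent)    # convert to the absolute scale

# usage: two strokes of different sizes become directly comparable
print(normalize_scale([[0, 0], [2, 1], [4, 3]]))
print(normalize_scale([[0, 0], [20, 10], [40, 30]]))
```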
  • Various settings can be made as to how far ahead the predicted trajectory is drawn. For example, it may be set to the maximum range allowed by the processing capability of the information processing apparatus, set so as to extend to a position a predetermined distance short of the actual pen position specified in advance, or made settable to a range desired by the user. Note that the processing capability (performance) of the information processing apparatus may be measured by a benchmark, and the drawing range of the predicted trajectory may be set according to the measurement result.
  • Alternatively, the drawing range of the predicted trajectory may be determined according to the device type of the information processing terminal.
  • Furthermore, past trajectories may be stored in the memory in association with the user ID, that is, the identifier of the user who drew them, and the similarity determination and the predicted trajectory calculation may be performed using the stored data corresponding to the current user.
  • A setting may also be adopted in which it is determined whether the user is right-handed or left-handed, and the drawing setting of the predicted trajectory is changed according to the dominant-hand information.
  • To determine the delay amount, that is, the number of frames to predict, for example, the following processing can be applied.
  • information such as the product number of the information processing apparatus terminal, the ID of the touch panel used in the terminal, the ID of the touch panel driver, and the ID of the graphic chip is acquired.
  • the delay time is estimated by collating with a database storing correspondence data of these IDs and delay times, and the number of predicted frames is determined.
  • the database referred to at this time may be stored in the information processing terminal, or may be configured to be placed on a server on the network.
  • the server is used, the acquired ID information is transmitted from the terminal to the server, and the estimated delay time or the predicted number of frames is received from the server.
  • the configuration may be such that the number of predicted frames is determined by actually measuring the delay time. That is, the time from input detection by the input device to the completion of drawing may be measured internally, and the number of predicted frames may be determined based on the measured time.
  • input detection and detection of drawing completion by the input device may be estimated based on signals from the touch detection driver and the graphic driver, or may be estimated by recognizing an image based on an image captured by the camera.
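  • Purely as an illustration of turning a measured or database-derived delay into a number of predicted frames, under the assumption of a fixed display frame rate:

```python
import math

def predicted_frame_count(delay_ms, frame_rate_hz=60.0, max_frames=10):
    """Convert an estimated drawing delay into the number of frames to predict (illustrative).

    delay_ms      : delay from input detection to drawing completion, in milliseconds,
                    obtained from internal measurement or from an ID/delay database
    frame_rate_hz : display refresh rate (assumed constant)
    max_frames    : upper bound so that an unusually large delay does not explode the prediction
    """
    frames = math.ceil(delay_ms * frame_rate_hz / 1000.0)
    return min(frames, max_frames)

# usage: a measured delay of 50 ms at 60 Hz calls for 3 predicted frames
print(predicted_frame_count(50.0))
```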
  • Whether or not to execute the trajectory prediction process may be switched according to the input device.
  • For example, when the input device is a dedicated pen, trajectory prediction is executed, and when the input device is a finger (touch input), trajectory prediction may not be executed.
  • Other input devices include, for example, a mouse, and can be configured to switch whether or not to perform trajectory prediction according to various input devices.
  • It may also be possible to set whether or not to execute the trajectory prediction process.
  • the learning result data applied to the trajectory prediction is stored in the memory as log data associated with the user ID of the application.
  • By inputting the user ID, the user-corresponding learning data stored in the memory can be used, and trajectory prediction using the user-corresponding learning result can be executed immediately.
  • the predicted trajectory display control processing method will be described sequentially in the following five processing examples.
  • the “drawn locus” corresponding to the actual trajectory is displayed as a solid line, and the “predicted trajectory” is displayed by changing at least one of the color, the transmittance, and the thickness so as not to stand out from the solid line.
  • the predicted trajectory is updated every frame. That is, the displayed predicted trajectory is erased in the next drawing frame, and a new predicted trajectory is drawn.
  • At least one of length, color, transmittance, and thickness is changed according to the reliability index value of the predicted trajectory.
  • FIG. 11 is a diagram similar to FIG. 9 described above, and shows an example of a plurality (k) of auxiliary predicted trajectories applied to the predicted trajectory determination process and the standard deviation at each coordinate position of the auxiliary predicted trajectories.
  • FIG. 11 shows a drawn trajectory 100 displayed according to the actual trajectory of a dedicated pen serving as the input device, auxiliary predicted trajectories 103 calculated by the processing of the above-described embodiment, for example the processing according to the flow shown in FIG., and a predicted trajectory 105 calculated as the average trajectory of the auxiliary predicted trajectories 103.
  • The tip of the drawn trajectory 100 is the latest drawing position 102 with coordinates (x, y), which is the position displayed in frame t.
  • The drawn locus from frame t-1 to frame t is the immediately preceding locus, of length s.
  • k auxiliary prediction trajectories 103 are calculated at an arbitrary time point based on the kNN method.
  • the k prediction trajectories have different variations depending on the situation.
  • A large variation among the plurality of auxiliary predicted trajectories 103 means that no past trajectory sufficiently close to the current trajectory could be found; that is, the predicted trajectory 105 set by taking the average of the k values is less likely to match the actual trajectory.
  • the standard deviation ⁇ of each coordinate position of the k auxiliary prediction trajectories is calculated as an index indicating the degree of variation.
  • a circle conceptually showing the size of the standard deviation of coordinate points. The larger the size of the circle, the larger the standard deviation, that is, the greater the variation in the auxiliary predicted trajectory, indicating that the final determined predicted trajectory to be calculated as an average value is uncertain.
  • The final position of the immediately preceding trajectory 101 corresponds to the coordinate position of the trajectory displayed in frame t; the standard deviation of each coordinate position is shown.
  • this standard deviation is used as a reliability index value corresponding to the predicted trajectory, and display control corresponding to the reliability index value is executed, for example.
  • The predicted trajectory 105 calculated by the process according to the flow shown in FIG. 5 is drawn ahead of the latest drawing position (x, y) 102 displayed in frame t.
  • As before, the drawn locus from frame t-1 to frame t is the immediately preceding locus of length s.
  • The predicted trajectory is set so as to connect the following points:
  • a predicted point 111 composed of the predicted coordinates (px1, py1) of the future frame t+1,
  • a predicted point 112 composed of the predicted coordinates (px2, py2) of the future frame t+2, and
  • a predicted point 113 composed of the predicted coordinates (px3, py3) of the future frame t+3.
  • The predicted trajectory 105 is set and displayed so as to connect these points.
  • The predicted trajectory 105 is a trajectory calculated as the average trajectory of the auxiliary predicted trajectories calculated according to a plurality of similar trajectories, as described above with reference to FIGS. That is, as described above with reference to FIG. 6, a plurality (k) of similar trajectories similar to the region including the immediately preceding trajectory up to the latest drawing position 102 are extracted from the drawn trajectory 100, and the average of the k auxiliary predicted trajectories set based on these similar trajectories is set as the predicted trajectory 105.
  • The extracted k auxiliary prediction trajectories have a certain degree of variation.
  • The degree of variation of the k auxiliary prediction trajectories is calculated as the standard deviation σ, and this is used as the reliability index value.
  • The standard deviation calculated based on the coordinate positions of the k auxiliary prediction trajectories, that is, the reliability index value, is set as follows: the reliability index value SD[u] corresponding to each future frame u is calculated.
  • The reliability index value corresponds to the standard deviation indicating the degree of variation of the auxiliary prediction trajectories, as described above; the smaller the value, the higher the reliability, and the larger the value, the lower the reliability.
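  • The following sketch illustrates, under stated assumptions, how a predicted trajectory and its per-frame reliability index values might be computed from k auxiliary predicted trajectories. The disclosure does not specify whether the standard deviation is taken per axis or as a two-dimensional spread; the sketch uses the latter, and the function name is hypothetical.

```python
import math

def predict_with_reliability(aux_trajectories):
    """
    aux_trajectories: k auxiliary predicted trajectories, each a list of (x, y)
    coordinates for future frames u = t+1, t+2, ...
    Returns, per future frame, the averaged predicted point and a reliability
    index value SD[u] (the spread of the k candidate points around the mean).
    """
    k = len(aux_trajectories)
    n_frames = min(len(tr) for tr in aux_trajectories)
    predicted, reliability = [], []
    for u in range(n_frames):
        xs = [tr[u][0] for tr in aux_trajectories]
        ys = [tr[u][1] for tr in aux_trajectories]
        mx, my = sum(xs) / k, sum(ys) / k              # averaged predicted point
        # 2-D spread of the k candidate points around the mean.
        var = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in zip(xs, ys)) / k
        predicted.append((mx, my))
        reliability.append(math.sqrt(var))             # smaller SD[u] = higher reliability
    return predicted, reliability


aux = [[(10, 10), (20, 18)], [(11, 9), (22, 25)], [(9, 11), (15, 12)]]
points, sd = predict_with_reliability(aux)
print(points, sd)
```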
  • the display mode of the predicted trajectory is controlled according to the reliability index value SD [u] of the predicted trajectory corresponding to each future frame u.
  • Although u is described here as a frame number, it is also possible to perform the processing with u treated as a time value.
  • The display control according to the reliability index value can be executed as any one, or a combination, of the five processing modes described above, that is, the following five processing examples.
  • (Processing example 1) The "drawn locus" corresponding to the actual trajectory is displayed as a solid line, and the "predicted trajectory" is displayed by changing at least one of the color, the transmittance, and the thickness so that it does not stand out from the solid line.
  • (Processing example 2) The predicted trajectory is updated every frame. That is, the displayed predicted trajectory is erased in the next drawing frame, and a new predicted trajectory is drawn.
  • (Processing example 3) At least one of the length, color, transmittance, and thickness is changed according to the reliability index value of the predicted trajectory.
  • (Processing example 4) Drawing of the predicted trajectory is switched between display and non-display according to the situation.
  • (Processing example 5) When the predicted trajectory deviates from the actual trajectory, the display mode of the predicted trajectory is changed so that it becomes inconspicuous.
  • Processing example 1 displays the "drawn locus" corresponding to the actual trajectory as a solid line, and displays the "predicted trajectory" by changing at least one of the color, the transmittance, and the thickness so that it does not stand out from the solid line.
  • FIG. 13 shows examples of display modes of the following trajectories.
  • (1) Trajectory already drawn (corresponding to the actual trajectory) (2) Predicted trajectory
  • the following display control is performed for these two trajectories.
  • (A) The color to be displayed is controlled: (1) the drawn trajectory (corresponding to the actual trajectory) is displayed as a black solid line, and (2) the predicted trajectory is displayed as a solid line of a color other than black (for example, red).
  • (B) The transmittance to be displayed is controlled: (1) the drawn trajectory is displayed as an opaque solid line, and (2) the predicted trajectory is displayed as a more transparent solid line.
  • (C) The thickness to be displayed is controlled: (1) the drawn trajectory is displayed as a thicker solid line, and (2) the predicted trajectory is displayed as a thinner solid line.
  • Display control of any one of (A) to (C), or a combination thereof, is performed.
  • Fig. 14 shows a specific display example.
  • The example shown in FIG. 14 corresponds to the display control example of FIG. 13; in other words, this is an example of controlling the thickness of the displayed line.
  • This display control may be further developed to set the display mode of the predicted trajectory according to its reliability.
  • FIG. 15 shows a display example of the following trajectories, as in FIG. 13: (1) the trajectory already drawn (corresponding to the actual trajectory) and (2) the predicted trajectory. The following display control is performed for these two trajectories.
  • A specific display example is shown in FIG. 16.
  • The example shown in FIG. 16 corresponds to the display control example of FIG. 15; in other words, this is an example of controlling the thickness of the displayed line.
  • In this way, (1) the drawn trajectory (corresponding to the actual trajectory) and (2) the predicted trajectory are displayed in different modes, and display control is further performed so that the predicted trajectory is displayed differently depending on its reliability.
  • With this display control, the user can clearly distinguish the drawn trajectory corresponding to the actual trajectory from the predicted trajectory, and can further confirm the reliability of the predicted trajectory.
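  • As a hedged illustration of this kind of display control (the concrete color, transparency, and width values below are assumptions, not values fixed by the disclosure), a style could be selected as follows:

```python
def trajectory_style(is_predicted, sd=None, sd_max=10.0):
    """
    Return a display style (color, transparency, width) for a trajectory segment.
    Actual trajectory: solid black. Predicted trajectory: a less conspicuous style;
    if a reliability index value sd is given, fade and thin it as sd grows.
    """
    if not is_predicted:
        return {"color": "black", "alpha": 1.0, "width": 3.0}
    style = {"color": "red", "alpha": 0.5, "width": 1.5}
    if sd is not None:
        confidence = max(0.0, 1.0 - min(sd, sd_max) / sd_max)  # 1 = reliable, 0 = unreliable
        style["alpha"] = 0.2 + 0.5 * confidence   # more transparent when unreliable
        style["width"] = 0.5 + 1.5 * confidence   # thinner when unreliable
    return style


print(trajectory_style(False))           # style for the drawn (actual) trajectory
print(trajectory_style(True, sd=8.5))    # faded, thin style for a low-reliability prediction
```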
  • Processing example 2 updates the predicted trajectory for each frame. That is, in this example, the displayed predicted trajectory is erased in the next drawing frame to draw a new predicted trajectory.
  • FIG. 17 shows a display example of the following two consecutive frames.
  • In frame n, the predicted trajectory 105 connecting predicted point 1 (111), predicted point 2 (112), and predicted point 3 (113) is displayed ahead of the latest drawing position 102 of the drawn trajectory 100 corresponding to the actual trajectory. It is assumed that the actual locus at this time is the actual locus 121 shown in the figure; this actual locus 121 is not displayed.
  • In the next frame, the latest drawing position is updated to become the updated latest drawing position 122.
  • This position substantially coincides with predicted point 1 (111) of frame n.
  • Each predicted point of the predicted trajectory 105 is also updated: the predicted points are set as updated predicted points 1 to 3 (131 to 133) at positions ahead of those of frame n, and the predicted trajectory 105 is displayed so as to connect these updated predicted points.
  • In this way, the prediction line is erased every frame and redrawn anew.
  • As a result, the prediction line is updated with a length corresponding to the refresh rate of the screen, and a display in which the prediction line is replaced with a highly accurate prediction line in each frame is realized.
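  • A minimal sketch of this per-frame update, assuming a hypothetical canvas API with named layers (the real rendering interface is not specified in the disclosure):

```python
class StubCanvas:
    """Minimal stand-in for a real drawing surface (hypothetical API, for illustration)."""
    def __init__(self):
        self.layers = {"ink": [], "prediction": []}

    def clear_layer(self, name):
        self.layers[name] = []

    def draw_polyline(self, name, points):
        self.layers[name].append(list(points))


def render_prediction(canvas, latest_position, predicted_points):
    # (Processing example 2) The predicted line shown in the previous frame is erased,
    # and a new predicted line is drawn from the current latest drawing position;
    # the actual ("ink") trajectory is left untouched.
    canvas.clear_layer("prediction")
    if predicted_points:
        canvas.draw_polyline("prediction", [latest_position] + list(predicted_points))


canvas = StubCanvas()
canvas.draw_polyline("ink", [(0, 0), (1, 2)])        # drawn (actual) trajectory
render_prediction(canvas, (1, 2), [(2, 4), (3, 6)])  # frame n
canvas.draw_polyline("ink", [(1, 2), (2, 4)])        # actual trajectory advances
render_prediction(canvas, (2, 4), [(3, 6), (4, 8)])  # frame n+1: old prediction replaced
print(len(canvas.layers["prediction"]))              # -> 1: only the newest prediction remains
```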
  • Processing example 3 is a processing example in which at least one of length, color, transmittance, and thickness is changed according to the reliability index value of the predicted trajectory.
  • This (Processing Example 3) is an extension of (Processing Example 1) and performs display control for changing the display length according to the reliability index value of the predicted trajectory.
  • The reliability index value is a value corresponding to the standard deviation of the auxiliary prediction trajectories described above with reference to FIGS. 11 and 12; the smaller the value, the higher the reliability of the predicted trajectory, and the larger the value, the lower the reliability of the predicted trajectory.
  • The reliability index value (standard deviation) of each predicted point described above with reference to FIGS. 11 and 12 is compared with a preset threshold value (reliability threshold value) to determine the reliability. If the reliability index value does not fall below the predetermined threshold value, the predicted trajectory immediately before reaching that predicted point is not drawn.
  • For example, assume that the reliability index values of the three predicted points constituting the predicted trajectory 105 scheduled to be displayed are calculated as follows.
  • (A) Reliability index value SD[1] of predicted point 1 (111): 0.08
  • (B) Reliability index value SD[2] of predicted point 2 (112): 3.75
  • (C) Reliability index value SD[3] of predicted point 3 (113): 8.52
  • As described above with reference to FIGS. 11 and 12, the reliability index value is the standard deviation σ of the coordinate positions of the auxiliary predicted trajectories corresponding to the similar trajectories used to calculate each predicted point; the smaller the value, the higher the reliability.
  • The threshold value for determining whether or not to display the predicted trajectory is a value that varies according to the speed of the immediately preceding trajectory in the drawn trajectory, that is, the moving speed of the immediately preceding trajectory 161 shown in FIG. 18. Specifically, a value obtained by multiplying a predetermined constant α by the moving speed Vt of the immediately preceding locus 161, that is, α × Vt, is used as the threshold value.
  • Here, α is a preset coefficient.
  • As the moving speed, the moving distance s over one frame of the immediately preceding locus can be applied. That is, the moving speed is defined as the moving distance per frame, the trajectory moving distance s between frames shown in FIG. 18 is applied, and α × s may be used as the threshold value.
  • The above threshold value α × s is compared with the reliability index value (standard deviation) SD[u] of each predicted point on the predicted trajectory.
  • Here, u denotes a future frame number after the display frame t of the latest drawing position.
  • Whether or not to display the predicted trajectory up to each predicted point is determined using the comparison formula between the reliability index value of each predicted point and the threshold value α × s, that is, SD[u] < α × s.
  • If the reliability index value SD[u] of a predicted point is smaller than the threshold value α × s, it is determined that the reliability of the predicted point is high, and the predicted trajectory up to that predicted point is drawn and displayed.
  • If the reliability index value SD[u] of a predicted point is not smaller than the threshold value α × s, it is determined that the reliability of the predicted point is low, and the drawing and display of the predicted trajectory up to that predicted point is canceled.
  • In the example of FIG. 18, the drawing determination of the predicted trajectory using the reliability index value of each predicted point is made as follows: the reliability index value SD[u] of each predicted point is substituted into the comparison formula SD[u] < α × s, and it is determined whether or not the formula is satisfied.
  • As a result, the predicted trajectory from predicted point 2 (112) to predicted point 3 (113) becomes the non-displayed predicted trajectory 151.
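  • A minimal sketch of this threshold test, assuming example values α = 2.0 and s = 3.0 (the actual coefficient and per-frame distance depend on the drawing situation):

```python
def visible_predicted_points(points, sd, alpha, s):
    """
    Keep predicted points only while SD[u] < alpha * s holds.
    Drawing stops at the first point whose reliability index value fails the test,
    so the trajectory beyond that point is not displayed.
    """
    threshold = alpha * s
    visible = []
    for p, sd_u in zip(points, sd):
        if sd_u >= threshold:       # reliability too low: stop drawing here
            break
        visible.append(p)
    return visible


# Values from the example above: SD[1]=0.08, SD[2]=3.75, SD[3]=8.52.
points = [(10, 10), (20, 18), (30, 25)]
print(visible_predicted_points(points, [0.08, 3.75, 8.52], alpha=2.0, s=3.0))
# -> only the first two predicted points remain; the segment toward point 3 is hidden
```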
  • Processing example 4 is a processing example in which drawing of a predicted trajectory is turned on / off, that is, switching between display and non-display is performed according to the situation.
  • FIG. 19 shows two detection states used as conditions for executing this (Processing example 4). For example, when either of the states (A) and (B) shown in FIG. 19 is detected, the on/off control of the predicted trajectory is executed according to this (Processing example 4). That is, the control applies in the following detection situations.
  • (A) When it is detected that the amount of decrease in the input device pressure value per unit time is equal to or greater than the specified threshold value [Thp]
  • (B) When it is detected that the input device movement amount per unit time is less than the specified threshold value [Thd]. When a state such as (A) or (B) above is detected, processing for turning off the display of the predicted trajectory is performed.
  • the input device is not limited to a dedicated pen, and may be a finger, for example.
  • the upper graph in FIG. 20 is a graph showing the time transition of the pressure value (P) with respect to the display unit of the input device.
  • the horizontal axis represents time (t), and the vertical axis represents the pressure value (P) with respect to the display unit of the input device.
  • the time (t) on the horizontal axis may be replaced with the frame number of the display frame.
  • the transition of the pressure value (P) from time T1 to T5 is as follows.
  • Time T1: Pressure value 0.53
  • Time T2: Pressure value 0.54
  • Time T3: Pressure value 0.42
  • Time T4: Pressure value 0.30
  • Time T5: Pressure value 0.00
  • As shown, the pressure value gradually decreases with time; for example, it is estimated that the pen is gradually moving away from the display unit.
  • The lower graph in FIG. 20 is created based on the time transition of the pressure value in the upper graph, and shows the time transition of the pressure value difference data, which is the amount of change in the pressure value per unit time.
  • the horizontal axis represents time (t), and the vertical axis represents the pressure value difference ( ⁇ P), which is the difference value per unit time of the pressure value with respect to the display unit of the input device.
  • For example, the difference value at time T2 is the difference between the pressure value at time T2 and the pressure value at time T1, that is, 0.54 − 0.53 = +0.01. Similarly, the difference value at time T3 is the difference between the pressure value at time T3 and the pressure value at time T2, and so on.
  • In this example, the threshold value [THp] of the pressure value difference is set to THp = −0.09, that is, a decrease of 0.09 or more per unit time.
  • Control is performed to stop (turn off) the display of the predicted trajectory when the measured pressure value difference indicates a decrease larger than this threshold value (−0.09).
  • When such a decrease is detected, the display of the predicted trajectory is stopped at that point.
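  • A minimal sketch of condition (A), using the example threshold THp = −0.09 and the pressure values of FIG. 20; the function name is hypothetical.

```python
def should_hide_by_pressure(pressure_samples, thp=-0.09):
    """
    Condition (A): hide the predicted trajectory when the per-unit-time drop in
    pen pressure reaches or exceeds the threshold (THp = -0.09 in the example).
    pressure_samples: pressure values at successive frames/times.
    """
    for prev, cur in zip(pressure_samples, pressure_samples[1:]):
        if (cur - prev) <= thp:     # decrease larger than the allowed amount
            return True
    return False


# Pressure values at T1..T5 from FIG. 20: 0.53, 0.54, 0.42, 0.30, 0.00
print(should_hide_by_pressure([0.53, 0.54, 0.42, 0.30, 0.00]))  # -> True
```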
  • the graph shown in FIG. 21 is a graph showing the time transition of the movement distance (D) per unit time in the display unit of the input device.
  • the horizontal axis represents time (t), and the vertical axis represents the movement distance (D) per display unit time of the input device.
  • the time (t) on the horizontal axis may be replaced with the frame number of the display frame.
  • the movement distance (D) per unit time is, for example, a movement distance between one display frame.
  • For example, the moving distance 15 shown at time T1 corresponds to the distance moved by the input device during the frame interval from time (T0), which is the previous frame display timing, to time (T1), which is the display time of the frame in question.
  • the transition of the unit time travel distance (D) from time T1 to T5 is as follows.
  • Time T1: Unit time travel distance 15
  • Time T2: Unit time travel distance 10
  • Time T3: Unit time travel distance 3
  • Time T4: Unit time travel distance 2
  • T5: Unit time travel distance 1
  • As shown, the unit-time moving distance gradually decreases with time; for example, it is estimated that the pen is gradually coming to a stop on the display unit.
  • In this example, the threshold value [THd] of the unit-time moving distance is set to THd = 4.
  • When the measured unit-time moving distance falls below this threshold value, control is performed to stop (turn off) the display of the predicted trajectory; in the example of FIG. 21, this first occurs at time T3, where the moving distance 3 is less than 4.
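  • A minimal sketch of condition (B), using the example threshold THd = 4 and the movement distances of FIG. 21; the function name is hypothetical.

```python
def should_hide_by_movement(distances, thd=4):
    """
    Condition (B): hide the predicted trajectory when the movement distance per
    unit time (e.g. per display frame) falls below the threshold THd (here 4).
    """
    return any(d < thd for d in distances)


# Unit-time movement distances at T1..T5 from FIG. 21: 15, 10, 3, 2, 1
print(should_hide_by_movement([15, 10, 3, 2, 1]))  # -> True (first at T3, where 3 < 4)
```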
  • Process example 5 is a process example in which when the predicted trajectory deviates from the actual trajectory, control is performed to change the display mode of the predicted trajectory to make the predicted trajectory inconspicuous.
  • the display example (a) shown in FIG. 22 is a normal display example.
  • a drawn trajectory 100 and a predicted trajectory 105 are displayed.
  • the predicted trajectory 105 is displayed as a line connecting the predicted points 1 to 3.
  • Predicted points 1 to 3 are average positions of constituent coordinates of a plurality of auxiliary predicted trajectories calculated from a plurality of similar trajectories as described above.
  • The actual trajectory 121 shown in the figure is not displayed on the display unit.
  • FIG. 22B is a display example when the present processing example 5 is applied.
  • the display area of the predicted trajectory 105 is set as the display mode change area 171 and the predicted trajectory 105 is changed to a display mode that is not conspicuous. This is executed when the reliability of the predicted trajectory is low. Specifically, for example, the process using the reliability index value described above is performed.
  • the reliability index value SD [u] is calculated for each of the prediction points 1,111 to 3,113 set as the constituent points of the prediction trajectory 105.
  • This reliability index value corresponds to the standard deviation ( ⁇ ) of the constituent coordinates of a plurality of auxiliary prediction trajectories (see FIG. 11) applied to the calculation of the prediction points.
  • When low reliability is determined, the display area of the predicted trajectory 105 is set as the display mode change area 171 as shown in FIG. 22(b), and the display mode is changed so that the predicted trajectory 105 is not conspicuous.
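  • As an illustration only, one possible way to derive such a display mode change area and a muted style is sketched below; the bounding-box region and the concrete style values are assumptions, not part of the disclosure.

```python
def display_mode_change_region(predicted_points, margin=2.0):
    """
    Sketch of (Processing example 5): when reliability is low, compute a region
    covering the predicted trajectory and return a muted style for it, so that
    the prediction becomes inconspicuous (e.g. faded or blurred).
    """
    xs = [p[0] for p in predicted_points]
    ys = [p[1] for p in predicted_points]
    region = (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)
    muted_style = {"alpha": 0.15, "blur_radius": 3.0}  # illustrative values only
    return region, muted_style


print(display_mode_change_region([(10, 10), (20, 18), (30, 25)]))
```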
  • the flowchart shown in FIG. 23 is a basic sequence of display control processing, and is a flowchart illustrating a sequence for executing display control according to the above-described (Processing Example 1) to (Processing Example 3).
  • the flowchart shown in FIG. 24 is a flowchart for explaining a display control sequence for executing the drawing control of the predicted trajectory according to the pressure value of the input device in addition to the above (processing example 1) to (processing example 3). Note that the processes shown in these flowcharts are executed under the control of a data processing unit of the information processing apparatus according to the present disclosure, specifically, a data processing unit including a CPU having a program execution function, for example.
  • the program is stored, for example, in the memory of the information processing apparatus.
  • (Step S302) The information processing apparatus draws a trajectory on the display unit by applying the event information input in step S301. However, as described above, an area where the drawing process cannot keep up with the actual trajectory of the pen, that is, the drawing delay area 23 described with reference to FIG. 1, is generated.
  • Furthermore, a learning process using the trajectory information of the past drawn trajectory is performed; that is, a learning process for estimating the predicted trajectory is performed according to the process described above with reference to FIGS. Specifically, as shown in FIG. 6, a process of detecting, from the drawn locus 80, a similar locus similar to the immediately preceding locus 82 including the latest drawing position 81 is performed. In step S302, learning processing such as similar trajectory detection is executed.
  • (Step S303) Next, a predicted trajectory estimation process using the similar regions detected in step S302 is executed.
  • Specifically, the k coordinates corresponding to a future frame u, estimated according to the plurality (k) of similar trajectories, are calculated, and the average position of these k coordinates is set as the coordinate of the predicted trajectory at frame u.
  • Furthermore, the coordinates constituting the predicted trajectory and the reliability index value SD of each predicted coordinate are calculated individually; that is, (x0, y0, sd0) to (xn, yn, sdn) are calculated.
  • The reliability index value SD is a value corresponding to the standard deviation σ of the plurality of estimated coordinates of each future frame calculated based on the plurality of similar trajectories, as described above with reference to FIG. 11.
  • (Step S305) The reliability index value SD[i] of each constituent coordinate (xi, yi) of the predicted trajectory is compared with the threshold value (α × s).
  • Here, α is a preset coefficient, and s is the trajectory moving distance per frame in the immediately preceding trajectory 161, as shown in FIG. 18.
  • In step S305, it is determined whether or not the determination formula SD[i] < α × s is satisfied.
  • If the determination formula is satisfied, it means that the standard deviation σ of the plurality of predicted coordinate positions calculated from the plurality of similar trajectories is small for the constituent coordinates (xi, yi) of the predicted trajectory, that is, the variation is small. In this case, it is determined that the reliability of the constituent coordinates (xi, yi) of the predicted trajectory is high, and the process proceeds to step S306.
  • If the determination formula is not satisfied, it means that the standard deviation σ of the plurality of predicted coordinate positions calculated from the plurality of similar trajectories is large for the constituent coordinates (xi, yi) of the predicted trajectory, that is, the variation is large. In this case, it is determined that the reliability of the constituent coordinates (xi, yi) of the predicted trajectory is low, and the process proceeds to step S307.
  • Step S306 If it is determined in step S305 that the reliability of the constituent coordinates (x i , y i ) of the predicted trajectory is high, a predicted trajectory drawing process is executed in step S306.
  • In this case, the predicted trajectory is drawn in a mode different from the already drawn trajectory corresponding to the actual trajectory. That is, as described with reference to FIGS. 13 and 14, a predicted trajectory in which at least one of the color, transmittance, and thickness is set differently from the drawn actual trajectory is drawn and displayed. Further, as described with reference to FIGS. 15 and 16, the display may be configured to change at least one of the color, transmittance, and thickness according to the reliability.
  • the predicted trajectory is updated and displayed sequentially every time a new event occurs.
  • Step S307 On the other hand, if it is determined in step S305 that the reliability of the constituent coordinates (x i , y i ) of the predicted trajectory is low, the predicted trajectory drawing process is stopped in step S307. In this case, the predicted trajectory is not drawn.
  • After the processing of step S306 or step S307, the process proceeds to step S309.
  • (Step S309) It is determined whether or not there is a next event input. If no next event input is detected, the process ends. If a next event input is detected, the process proceeds to step S310.
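  • For reference, a condensed sketch of this basic sequence (prediction by averaging, reliability check, and drawing cutoff) is given below; it omits event handling, learning, and rendering details, and the function name, parameters, and threshold values are assumptions rather than values fixed by the disclosure.

```python
def process_event(drawn, aux_trajectories, alpha, s):
    """
    One condensed pass of the basic display-control sequence (cf. the FIG. 23 flow):
    average the k auxiliary trajectories into predicted coordinates, attach a
    reliability value SD[i] to each, and draw only the coordinates satisfying
    SD[i] < alpha * s.
    """
    k = len(aux_trajectories)
    n = min(len(tr) for tr in aux_trajectories)
    to_draw = []
    for i in range(n):
        xs = [tr[i][0] for tr in aux_trajectories]
        ys = [tr[i][1] for tr in aux_trajectories]
        mx, my = sum(xs) / k, sum(ys) / k
        sd = (sum((x - mx) ** 2 + (y - my) ** 2 for x, y in zip(xs, ys)) / k) ** 0.5
        if sd >= alpha * s:          # low reliability: stop drawing the prediction here
            break
        to_draw.append((mx, my))     # this predicted coordinate is drawn
    return drawn, to_draw            # actual trajectory plus the displayed prediction


drawn = [(0, 0), (2, 1), (4, 3)]
aux = [[(6, 5), (8, 8)], [(6, 4), (9, 12)], [(5, 5), (4, 2)]]
print(process_event(drawn, aux, alpha=2.0, s=2.0))
```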
  • (Step S402) The information processing apparatus draws a trajectory on the display unit by applying the event information input in step S401. However, as described above, an area where the drawing process cannot keep up with the actual trajectory of the pen, that is, the drawing delay area 23 described with reference to FIG. 1, is generated.
  • Furthermore, a learning process using the trajectory information of the past drawn trajectory is performed; that is, a learning process for estimating the predicted trajectory is performed according to the process described above with reference to FIGS. Specifically, as shown in FIG. 6, a process of detecting, from the drawn locus 80, a similar locus similar to the immediately preceding locus 82 including the latest drawing position 81 is performed. In step S402, learning processing such as similar trajectory detection is executed.
  • (Step S403) Next, a predicted trajectory estimation process using the similar regions detected in step S402 is executed.
  • Specifically, the k coordinates corresponding to a future frame u, estimated according to the plurality (k) of similar trajectories, are calculated, and the average position of these k coordinates is set as the coordinate of the predicted trajectory at frame u.
  • Furthermore, the coordinates constituting the predicted trajectory and the reliability index value SD of each predicted coordinate are calculated individually; that is, (x0, y0, sd0) to (xn, yn, sdn) are calculated.
  • The reliability index value SD is a value corresponding to the standard deviation σ of the plurality of estimated coordinates of each future frame calculated based on the plurality of similar trajectories, as described above with reference to FIG. 11.
  • (Step S405) The reliability index value SD[i] of each constituent coordinate (xi, yi) of the predicted trajectory is compared with the threshold value (α × s).
  • Here, α is a preset coefficient, and s is the trajectory moving distance per frame in the immediately preceding trajectory 161, as shown in FIG. 18.
  • In step S405, it is determined whether or not the determination formula SD[i] < α × s is satisfied.
  • If the determination formula is satisfied, it means that the standard deviation σ of the plurality of predicted coordinate positions calculated from the plurality of similar trajectories is small for the constituent coordinates (xi, yi) of the predicted trajectory, that is, the variation is small. In this case, it is determined that the reliability of the constituent coordinates (xi, yi) of the predicted trajectory is high, and the process proceeds to step S406.
  • If the determination formula is not satisfied, it means that the standard deviation σ of the plurality of predicted coordinate positions calculated from the plurality of similar trajectories is large for the constituent coordinates (xi, yi) of the predicted trajectory, that is, the variation is large. In this case, it is determined that the reliability of the constituent coordinates (xi, yi) of the predicted trajectory is low, and the process proceeds to step S407.
  • Step S406 If it is determined in step S405 that the reliability of the constituent coordinates (x i , y i ) of the predicted trajectory is high, a predicted trajectory drawing process is executed in step S406.
  • In this case, the predicted trajectory is drawn in a mode different from the already drawn trajectory corresponding to the actual trajectory. That is, as described with reference to FIGS. 13 and 14, a predicted trajectory in which at least one of the color, transmittance, and thickness is set differently from the drawn actual trajectory is drawn and displayed. Further, as described with reference to FIGS. 15 and 16, the display may be configured to change at least one of the color, transmittance, and thickness according to the reliability.
  • the predicted trajectory is updated and displayed sequentially every time a new event occurs.
  • (Step S407) On the other hand, if it is determined in step S405 that the reliability of the constituent coordinates (xi, yi) of the predicted trajectory is low, the drawing process of the predicted trajectory is stopped in step S407. In this case, the predicted trajectory is not drawn. Alternatively, as described above with reference to FIG. 22, display control such as blurring processing may be executed so that the predicted trajectory is not noticeable.
  • After the processing of step S406 or step S407, the process proceeds to step S409.
  • (Step S409) It is determined whether a condition for stopping the drawing process of the predicted trajectory has been detected. This is a process for determining whether or not the detection states (A) and (B) described above with reference to FIG. 19 have been detected. That is, the following states are detected.
  • (A) Detection that the amount of decrease in the input device pressure value per unit time is equal to or greater than the specified threshold value [Thp].
  • (B) Detection that the input device movement amount per unit time is less than the specified threshold value [Thd].
  • In step S409, it is determined whether either of the above (A) or (B) has been detected. If detected, the process proceeds to step S410; if not, the process proceeds to step S411.
  • (Step S410) If it is determined in step S409 that situation (A) or (B) has been detected, drawing of the predicted trajectory is stopped in step S410. This process corresponds to the process described above with reference to FIGS. 20 and 21. After this process, the process proceeds to step S411.
  • (Step S411) It is determined whether or not there is a next event input. If no next event input is detected, the process ends. If a next event input is detected, the process proceeds to step S412.
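  • The additional stop-condition check of step S409 could be sketched as follows; the threshold values are the example values used above and are assumptions rather than values fixed by the disclosure.

```python
def prediction_enabled(pressure_samples, distances, thp=-0.09, thd=4):
    """
    Sketch of the step S409 check: prediction drawing is suppressed when either
    stop condition holds - (A) the pressure drop per unit time reaches THp, or
    (B) the movement per unit time falls below THd.
    """
    pressure_drop = any((cur - prev) <= thp
                        for prev, cur in zip(pressure_samples, pressure_samples[1:]))
    too_slow = any(d < thd for d in distances)
    return not (pressure_drop or too_slow)


print(prediction_enabled([0.53, 0.54, 0.52], [15, 12, 10]))  # -> True: keep drawing
print(prediction_enabled([0.53, 0.30, 0.00], [15, 3, 1]))    # -> False: stop drawing
```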
  • In step S1, the horizontal line L1 is written. Thereafter, in step S2, a vertical line L2 is written. It is assumed that the line L2 needs to be written so as to pass substantially through the center position of the line L1.
  • The information processing apparatus includes an input unit 301, an output unit (display unit) 302, a sensor 303, a control unit (CPU or the like) 304, a memory (RAM) 305, and a memory (nonvolatile memory) 306.
  • the input unit 301 is configured as an input unit that also serves as a display unit having a touch panel function, for example.
  • Note that the touch panel function is not indispensable; any configuration may be used as long as it can input motion detection information of an input device.
  • For example, a camera may be used as the input unit 301 to detect user movement (such as a gesture) and use it as input information.
  • Alternatively, a mouse provided with a general PC may be used as the input device. Note that the input unit 301 is not limited to inputting the movement information of the input device, and also includes input means for performing various settings such as brightness adjustment and mode setting of the display unit.
  • The output unit 302 includes a display unit that displays trajectory information corresponding to the movement information of the input device detected by the input unit 301, for example, a touch panel display.
  • the sensor 303 is a sensor that inputs information to be applied to the processing of the present disclosure, such as a sensor that detects the pressure on the touch pen or a sensor that detects the pressing area of the finger.
  • the control unit 304 includes, for example, a CPU configured by an electronic circuit, and functions as a data processing unit that executes processing according to the flowchart described in the above-described embodiment, for example.
  • the memory 305 is, for example, a RAM, and serves as a work area for executing processing according to the flowchart described in the embodiment, as well as a storage area for input device location information used by the user, various parameters applied to data processing, and the like. Used.
  • the memory 306 is a non-volatile memory, and is used as a storage area for storing, for example, a program for executing processing according to the flowchart described in the above-described embodiment, and a user's drawing trajectory.
  • kNN: k-Nearest Neighbors
  • SVR: Support Vector Regression
  • RVR: Relevance Vector Regression
  • HMM: Hidden Markov Model
  • Note that the information processing apparatus may be configured to be integrated with the display device that performs trajectory display, or may be configured as a device capable of communicating with the display device.
  • For example, the information processing apparatus may be a server that can communicate data via a network.
  • the input information of the input device operated by the user is transmitted to the server, and the server calculates the predicted trajectory based on the learning process described above. Further, the server executes a process of transmitting the calculation result to a display device on the user side, for example, a tablet terminal, and displaying a predicted locus on the display unit of the tablet terminal.
  • the technology disclosed in this specification can take the following configurations. (1) having a data processing unit that performs display control processing of a locus according to input position information generated by user operation input; The data processing unit An actual trajectory identified based on the input position information; A trajectory of an area where the identification of the real trajectory is not completed, and a predicted trajectory identified by a predetermined prediction process; An information processing apparatus that executes display control for displaying the image in a different manner.
  • (4) The data processing unit displays at least one of the color, the transparency, or the thickness of the predicted locus differently from the actual locus, and displays the predicted locus in a less conspicuous manner than the actual locus.
  • The information processing apparatus according to any one of (1) to (3).
  • The data processing unit changes at least one of the color, transparency, or thickness of the predicted trajectory according to the reliability of the predicted trajectory, and displays the predicted trajectory in a less conspicuous manner as the reliability decreases.
  • The information processing apparatus according to (5).
  • The input position information is input position information obtained by detecting a contact position of an input object with respect to a touch panel, and the data processing unit acquires a pressure value representing the pressure at which the input object contacts the touch panel and determines the display mode of the predicted trajectory according to the pressure value.
  • The information processing apparatus according to any one of (1) to (9).
  • The data processing unit acquires a movement amount per unit time of the input position information, and determines the display mode of the predicted trajectory according to the movement amount. The information processing apparatus according to any one of (1) to (11).
  • The data processing unit, as the predicted trajectory calculation process, detects a plurality of similar trajectories similar to the trajectory immediately before the calculation area of the predicted trajectory from the past trajectory, estimates a subsequent trajectory of each similar trajectory based on the detected plurality of similar trajectories, and calculates the predicted trajectory by an averaging process or a weighted addition process of the estimated plurality of subsequent trajectories.
  • The information processing apparatus according to any one of (1) to (14).
  • the data processing unit performs a trajectory display control process according to the input position information generated by the user's operation input, The data processing unit, in the display control process, An actual trajectory identified based on the input position information; A trajectory of an area where the identification of the real trajectory is not completed, and a predicted trajectory identified by a predetermined prediction process; An information processing method for executing display control for displaying in a different manner.
  • a program for executing information processing in an information processing device Causing the data processing unit to perform display control processing of the locus according to the input position information generated by the user's operation input; In the display control process, An actual trajectory identified based on the input position information; A trajectory of an area where the identification of the real trajectory is not completed, and a predicted trajectory identified by a predetermined prediction process; A program for executing display control for displaying the image in a different manner.
  • the series of processes described in the specification can be executed by hardware, software, or a combined configuration of both.
  • For example, the program in which the processing sequence is recorded can be installed and executed in a memory of a computer incorporated in dedicated hardware, or can be installed and executed on a general-purpose computer capable of executing various kinds of processing.
  • the program can be recorded in advance on a recording medium.
  • the program can be received via a network such as a LAN (Local Area Network) or the Internet and installed on a recording medium such as a built-in hard disk.
  • the various processes described in the specification are not only executed in time series according to the description, but may be executed in parallel or individually according to the processing capability of the apparatus that executes the processes or as necessary.
  • the system is a logical set configuration of a plurality of devices, and the devices of each configuration are not limited to being in the same casing.
  • As described above, according to the configuration of an embodiment of the present disclosure, display control of a predicted trajectory predicted according to past trajectory information is realized. Specifically, the apparatus has a data processing unit that performs display control processing of a trajectory according to input position information; the data processing unit displays, as an actual trajectory, the trajectory for which the trajectory calculation process according to the input position information has been completed, estimates, as a predicted trajectory, the trajectory of an area for which the trajectory calculation process has not been completed, and displays the estimated predicted trajectory in a manner different from the displayed actual trajectory. For example, at least one of the color, transparency, and thickness of the predicted trajectory is displayed in a manner different from the actual trajectory, so that the predicted trajectory is displayed less conspicuously than the actual trajectory. Further, processing for hiding the predicted trajectory is performed according to the reliability. With this configuration, display control of a predicted trajectory predicted according to past trajectory information is realized.

Abstract

The present invention controls the display of a predicted locus predicted in accordance with past locus information. The present invention has a data processing unit that performs display control of a locus corresponding to input position information. The data processing unit displays, as an actual locus, a locus for which the locus calculation process corresponding to the input position information has been completed, estimates, as a predicted locus, a locus in a region for which the locus calculation process has not been completed, and displays the estimated predicted locus in a manner different from the already displayed actual locus. For example, at least one of the color, transparency, and thickness of the predicted locus is displayed in a manner different from that of the actual locus, so that the predicted locus is displayed less conspicuously than the actual locus. The predicted locus is also hidden in accordance with its reliability.

Description

Information processing apparatus, information processing method, and program
The present disclosure relates to an information processing apparatus, an information processing method, and a program. Specifically, the present disclosure relates, for example, to an information processing apparatus, an information processing method, and a program that perform display control for displaying an actual trajectory and a predicted trajectory in an easily distinguishable manner when displaying a drawing trajectory on a touch panel display.
In recent years, a touch panel type input display device is widely used. A touch panel type input display device is a device that enables information input by touching a display surface with a finger or a pen as well as information display processing using, for example, a liquid crystal display.
A contact position with a finger or a pen on the display surface is detected, and processing according to the detected position, for example, drawing processing is enabled. There are various methods such as an electromagnetic induction method and a capacitance method for detecting the position of the pen or finger.
However, in such an apparatus, for example, when a line is drawn according to the pen trajectory using a pen as an input device, a “deviation” occurs between the position of the pen tip and the tip position of the display line on the display. There is.
This is caused by a delay in processing time from the pen position detection process to the line drawing process, and becomes more prominent when the pen is moved at high speed.
For example, Patent Document 1 (Japanese Patent Laid-Open No. 09-190275) is known as a prior art that discloses a configuration for eliminating this deviation. Patent Document 1 discloses a configuration in which the locus ahead, for which processing has not been completed, is predicted and drawn using a static filter (function).
In this disclosed method, a linear approximation line is generated using the latest locus region of the locus for which drawing has been completed, and the generated linear approximation line is extended and displayed as the locus of the unprocessed (undrawn) portion. In this method, it is also possible to generate an approximate curve by changing the order of the approximation or by using a trigonometric function, for example.
However, this disclosed technique only predicts the locus ahead by an approximation process using only the drawn locus immediately before the unprocessed area. For example, at a sudden speed change or a change of direction, an overshoot of the predicted point or a large deviation between the actual pen-tip position and the predicted point may occur.
Also, if the actual trajectory and the predicted trajectory are displayed in the same manner, the user (drawer) may be confused when an incorrect predicted trajectory is displayed.
JP 09-190275 A
The present disclosure has been made in view of the above problems, for example, and aims to provide an information processing apparatus, an information processing method, and a program that clearly distinguish and display the actual trajectory and the predicted trajectory so that the user can perform smooth drawing processing without being confused.
The first aspect of the present disclosure is:
A data processing unit that performs display control processing of a locus according to input position information generated by user operation input;
The data processing unit
An actual trajectory identified based on the input position information;
A trajectory of an area where the identification of the real trajectory is not completed, and a predicted trajectory identified by a predetermined prediction process;
Is in an information processing apparatus that performs display control for displaying the image in a different manner.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the predicted trajectory is a trajectory predicted based on an actual trajectory specified in the past.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit displays at least one of the color, the transparency, and the thickness of the predicted locus in a manner different from the actual locus.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit sets at least one of the color, transparency, and thickness of the predicted trajectory to differ from the actual trajectory, so that the predicted trajectory is displayed in a manner less conspicuous than the actual trajectory.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit determines a display mode of the predicted trajectory according to the reliability of the predicted trajectory.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit determines at least one of the color, transparency, and thickness of the predicted trajectory according to the reliability of the predicted trajectory and displays it accordingly.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit changes at least one of the color, transparency, and thickness of the predicted trajectory according to the reliability of the predicted trajectory, and displays the predicted trajectory in a less conspicuous manner as the reliability decreases.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit controls the display unit so that the predicted trajectory is not displayed when the reliability of the predicted trajectory is determined to be lower than a prescribed threshold value.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, when the data processing unit determines that the reliability of the predicted trajectory is lower than a prescribed threshold value, it changes the display of the predicted trajectory to an inconspicuous display mode.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the input position information is input position information obtained by detecting a contact position of an input object with respect to a touch panel, and the data processing unit acquires a pressure value representing the pressure at which the input object contacts the touch panel and determines the display mode of the predicted locus according to the pressure value.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit controls the display unit so that the predicted trajectory is not displayed when it detects that the decrease in the pressure value is larger than a prescribed threshold value.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit acquires a movement amount per unit time of the input position information, and determines a display mode of the predicted locus according to the movement amount.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit stops displaying the predicted trajectory when detecting that the movement amount has become less than a prescribed threshold value.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the input object is a stylus.
Furthermore, in an embodiment of the information processing apparatus according to the present disclosure, the data processing unit, as the predicted trajectory calculation process, detects a plurality of similar trajectories similar to the trajectory immediately before the calculation area of the predicted trajectory from past trajectories, estimates a subsequent trajectory of each similar trajectory based on the detected plurality of similar trajectories, and calculates the predicted trajectory by an averaging process or a weighted addition process of the estimated plurality of subsequent trajectories.
Furthermore, the second aspect of the present disclosure is:
An information processing method executed in an information processing apparatus,
The data processing unit performs a trajectory display control process according to the input position information generated by the user's operation input,
The data processing unit, in the display control process,
An actual trajectory identified based on the input position information;
A trajectory of an area where the identification of the real trajectory is not completed, and a predicted trajectory identified by a predetermined prediction process;
There is an information processing method for executing display control for displaying the image in a different manner.
Furthermore, the third aspect of the present disclosure is:
A program for executing information processing in an information processing apparatus;
Causing the data processing unit to perform display control processing of the locus according to the input position information generated by the user's operation input;
In the display control process,
An actual trajectory identified based on the input position information;
A trajectory of an area where the identification of the real trajectory is not completed, and a predicted trajectory identified by a predetermined prediction process;
Is in a program for executing display control for displaying the image in a different manner.
Note that the program of the present disclosure is a program that can be provided by, for example, a storage medium or a communication medium provided in a computer-readable format to an image processing apparatus or a computer system that can execute various program codes. By providing such a program in a computer-readable format, processing corresponding to the program is realized on the information processing apparatus or the computer system.
Further objects, features, and advantages of the present disclosure will become apparent from a more detailed description based on embodiments of the present invention described later and the accompanying drawings. In this specification, the system is a logical set configuration of a plurality of devices, and is not limited to one in which the devices of each configuration are in the same casing.
According to the configuration of an embodiment of the present disclosure, display control of a predicted trajectory predicted according to past trajectory information is realized.
Specifically, it has a data processing unit that performs a display control process of the trajectory according to the input position information, and the data processing unit displays the trajectory after the trajectory calculation process according to the input position information is completed as an actual trajectory, A trajectory in an area where the trajectory calculation process according to the input position information is not completed is estimated as a predicted trajectory, and the estimated predicted trajectory is displayed in a manner different from the displayed actual trajectory. For example, at least one of the color, transparency, and thickness of the predicted trajectory is displayed in a manner different from the actual trajectory, and the predicted trajectory is displayed in a manner that is less conspicuous than the actual trajectory. Further, a process for hiding the predicted trajectory is performed according to the reliability.
With this configuration, display control of a predicted trajectory predicted according to past trajectory information is realized.
Note that the effects described in the present specification are merely examples and are not limited, and may have additional effects.
FIG. 1 is a diagram explaining problems when a line segment (line) is drawn on an input display device using a pen-type input device.
FIG. 2 is a diagram explaining an example of occurrence of a prediction error in the drawing process of a predicted trajectory.
FIG. 3 is a diagram explaining the trajectory prediction process executed by the information processing apparatus of the present disclosure.
FIG. 4 is a diagram explaining an outline of the trajectory prediction process executed by the information processing apparatus of the present disclosure.
FIG. 5 is a flowchart explaining the estimation of a predicted trajectory to which the kNN method is applied and the drawing processing sequence.
FIG. 6 is a diagram explaining a specific example of the estimation of a predicted trajectory to which the kNN method is applied and the drawing process.
FIG. 7 is a diagram explaining feature amounts applied to the estimation of a predicted trajectory.
FIG. 8 is a diagram explaining an example of similar trajectory extraction processing.
FIG. 9 is a diagram explaining the reliability of a predicted trajectory.
FIG. 10 is a flowchart explaining a sequence of predicted trajectory calculation and drawing processing involving standard deviation calculation and a processing change according to the result.
FIG. 11 is a diagram showing an example of a plurality of (k) auxiliary predicted trajectories applied to the predicted trajectory determination process and the standard deviation of each coordinate position of the auxiliary predicted trajectories.
FIG. 12 is a diagram showing a calculation example of the reliability of each future frame u = t+1, t+2, t+3.
FIG. 13 is a diagram explaining a specific display control example of the predicted trajectory.
FIG. 14 is a diagram explaining a specific display control example of the predicted trajectory.
FIG. 15 is a diagram explaining a specific display control example according to the reliability of the predicted trajectory.
FIG. 16 is a diagram explaining a specific display control example according to the reliability of the predicted trajectory.
FIG. 17 is a diagram explaining a processing example in which the displayed predicted trajectory is erased in the next drawing frame and a new predicted trajectory is drawn.
FIG. 18 is a diagram explaining processing in which, when the reliability index value is not below a predetermined threshold value, the predicted trajectory immediately before reaching that predicted point is not drawn.
FIG. 19 is a diagram explaining a processing example in which drawing of the predicted trajectory is turned on/off, that is, switched between display and non-display, according to the situation.
FIG. 20 is a diagram explaining detection processing of the amount of decrease in the input device pressure value per unit time being equal to or greater than the specified threshold value [Thp].
FIG. 21 is a diagram explaining a processing example of detecting that the input device movement amount per unit time is less than the specified threshold value [Thd].
FIG. 22 is a diagram explaining a processing example of performing control to change the display mode of the predicted trajectory to make it inconspicuous when the predicted trajectory deviates from the actual trajectory.
FIG. 23 is a flowchart explaining a processing sequence of display control of the predicted trajectory.
FIG. 24 is a flowchart explaining a processing sequence of display control of the predicted trajectory.
FIG. 25 is a diagram explaining an example of the effect of displaying the predicted trajectory.
FIG. 26 is a diagram explaining a hardware configuration example of the information processing apparatus of the present disclosure.
Hereinafter, the information processing apparatus, information processing method, and program of the present disclosure will be described in detail with reference to the drawings. The description proceeds in the following order.
 1. Problems in the drawing process
 2. Trajectory prediction processing executed by the information processing apparatus of the present disclosure
 3. Example of prediction processing using coordinate information of the drawn trajectory
 3-1. Similarity determination processing
 3-2. Predicted trajectory determination processing
 4. Example of processing according to the reliability of the predicted trajectory
 5. Processing that applies pressure detection information of the input device
 6. Modifications of the predicted trajectory calculation and drawing processing
 7. Embodiment for controlling the drawing mode of the predicted trajectory
 8. Processing sequence for display control of the predicted trajectory
 9. Example of the effect of displaying a highly accurate predicted trajectory
 10. Hardware configuration example of the information processing apparatus of the present disclosure
 11. Summary of the configuration of the present disclosure
[1. Problems in the drawing process]
First, problems that arise when drawing on an input display device with an input device (input object) such as a touch pen will be described.
FIG. 1 shows an example in which a line segment (line) is drawn on a touch-panel-type input display device 10 using a pen-type input device 11.
The input device 11 moves successively in the direction of the arrow, and a trajectory following the movement of the input device 11 is drawn. That is, a line corresponding to the trajectory of the input device is displayed.
However, when the input device 11 moves quickly, the trajectory drawing process cannot keep up with its moving speed, and the display of the trajectory line segment may lag behind.
The drawn trajectory 21 shown in the figure is the line displayed on the display according to the trajectory of the input device. The latest drawing position 22 of the drawn trajectory 21 does not coincide with the current pen tip position 15 of the input device 11; it corresponds to the position of the input device 11 a certain time earlier. The drawing delay area 23 shown in the figure is an area over which the pen has already passed along its trajectory, but the line (line segment) corresponding to that part of the trajectory has not yet been displayed, leaving a blank area. Such blank areas arise from the delay of the drawing process.
One approach to eliminating such blank areas is the processing using a static filter (function) described in the [Background Art] section above. In this processing, the line ahead is predicted and drawn by, for example, linear approximation using a line area of a predetermined length including the latest drawing position 22 in the drawn trajectory 21 shown in FIG. 1.
In other words, as described above, a linear approximation line is generated from the most recent portion of the drawn trajectory and extended beyond the latest drawing position 22 to draw a line corresponding to the predicted trajectory.
However, with this kind of processing, a predicted trajectory that differs from the actual pen trajectory may be generated, and an incorrect trajectory may be drawn.
An example of how such prediction errors occur will be described with reference to FIG. 2.
FIG. 2 shows two examples of prediction errors.
(a) Prediction error example 1 is an error caused by overshoot when the pen trajectory suddenly curves and the traveling direction of the pen changes sharply.
At the latest drawing position 32 of the drawn line 31, the pen rapidly changes its traveling direction, as shown by the pen trajectory 34 in the figure.
However, when the prediction process using the static filter (function) described above is performed, the only data used for the prediction is a drawn trajectory of a predetermined length including the latest drawing position 32 in the drawn trajectory 31. That is, only the approximation-applied trajectory 33 shown in the figure is used for the prediction.
For example, a line extended along the direction of this approximation-applied trajectory 33 is set and displayed as the predicted trajectory 35.
As a result, as shown in the figure, the predicted trajectory 35 is placed at a position completely different from the actual pen trajectory 34.
The other prediction error shown in FIG. 2 is (b) prediction error example 2, in which the pen serving as the input device suddenly stops.
This is the case where the pen stops moving at the latest drawing position 42 of the drawn trajectory 41.
When the prediction process using the static filter (function) described above is performed, the data used for the prediction is a drawn line of a predetermined length including the latest drawing position 42 in the drawn trajectory 41; the approximation-applied trajectory 43 shown in the figure is the data used for the prediction.
A line extended along the direction of this approximation-applied trajectory 43 is set as the predicted trajectory 45. As a result, as shown in the figure, the predicted trajectory 45 is placed ahead of the actual pen position.
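To make the limitation concrete, the conventional static extrapolation described above can be sketched roughly as follows. This is an illustrative sketch only; the function name, the point list `points`, and the window and horizon sizes are assumptions rather than anything specified in this disclosure.

```python
# Minimal sketch of the conventional static (linear) extrapolation described above.
# `points` is assumed to be a list of (x, y) tuples, one per display frame.

def static_extrapolate(points, num_future=3, window=4):
    """Extend the last `window` points along their average direction."""
    if len(points) < 2:
        return []
    recent = points[-window:]
    # Average per-frame displacement over the recent window.
    dx = (recent[-1][0] - recent[0][0]) / (len(recent) - 1)
    dy = (recent[-1][1] - recent[0][1]) / (len(recent) - 1)
    x, y = points[-1]
    predicted = []
    for _ in range(num_future):
        x, y = x + dx, y + dy  # keep going in the same direction
        predicted.append((x, y))
    return predicted
```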
As described above, line segment prediction using a static filter (function) applies only the line segment immediately preceding the end of the drawn trajectory to the approximation. Consequently, when the pen moves in a way that differs from the approximation-applied trajectory, the prediction result is completely different from the actual pen trajectory, and an incorrect predicted trajectory is displayed.
[2. Trajectory prediction processing executed by the information processing apparatus of the present disclosure]
Next, an example of the trajectory prediction processing executed by the information processing apparatus of the present disclosure will be described.
FIG. 3 shows an example of this trajectory prediction processing. By applying the processing of the information processing apparatus of the present disclosure, the predicted trajectory 53, which is the predicted line from the latest drawing position 52 of the drawn trajectory 51 to the pen position 54, can be estimated and displayed with high accuracy. This highly accurate estimation and display of the predicted trajectory 53 reduces the sense of delay felt by the user and enables drawing without a feeling of incongruity.
FIG. 4 outlines the trajectory prediction processing executed by the information processing apparatus of the present disclosure.
In this embodiment, a learning process (machine learning) that uses the configuration information of the drawn trajectory 71 is executed, and the learning result is applied to set the predicted trajectory 73 and perform the drawing process.
The predicted trajectory 73 is drawn ahead of the latest drawing position 72.
In this embodiment, the predicted trajectory is estimated by a learning process that uses not only the information of the drawn trajectory immediately before the latest drawing position but also information on earlier drawing trajectories. This reduces prediction errors such as those described above with reference to FIG. 2 and realizes highly accurate prediction.
The trajectory prediction processing executed by the information processing apparatus of the present disclosure is a dynamic prediction process, which differs from the static prediction using the conventional fixed static filter (function) described above.
In static prediction, only the line segment immediately preceding the end of the drawn trajectory is applied to the approximation, and earlier trajectory information does not affect the predicted trajectory. Therefore, if the immediately preceding trajectory used for prediction is the same, the predicted trajectory is always the same.
In dynamic prediction, on the other hand, trajectory prediction also refers to past trajectory information other than the line segment immediately preceding the end of the drawn trajectory, and these past trajectories affect the predicted trajectory. Therefore, even if the immediately preceding trajectory used for prediction is the same, the predicted trajectory differs when the past trajectories differ.
Note that the past trajectories include trajectories earlier than the immediately preceding trajectory, including trajectories that are not displayed on the display unit.
Hereinafter, as an example of the learning process applied in this embodiment, estimation of the predicted trajectory by a learning process using the k-nearest neighbors method (kNN method) will be described.
FIG. 5 is a flowchart describing the sequence of predicted trajectory estimation and drawing processing using the kNN method.
The processing shown in this flowchart is executed under the control of the data processing unit of the information processing apparatus of the present disclosure, specifically a data processing unit including, for example, a CPU with a program execution function. The program is stored, for example, in the memory of the information processing apparatus.
First, the processing of each step in the flow shown in FIG. 5 will be described in order, and then a specific example of each step will be described with reference to FIG. 6.
The information processing apparatus performs processing that draws a line according to the trajectory of an input device (input object) such as a dedicated pen (stylus).
However, as described with reference to FIG. 1 and elsewhere, a drawing delay area, for which the drawing process has not kept up, occurs immediately behind the current position of the input device (dedicated pen). The flow shown in FIG. 5 is the sequence of processing that estimates the trajectory of the input device in this drawing delay area and draws a line along the estimated trajectory as the predicted trajectory, for example the estimation and drawing of the predicted trajectory 73 shown in FIG. 4.
(Step S101)
First, in step S101, a plurality (k) of trajectory regions (similar trajectories) that are similar to the latest drawn trajectory (immediately preceding trajectory) are searched for in the drawn trajectory.
The drawn trajectory is the trajectory region for which the trajectory analysis of the input device has been completed and the drawing of the corresponding line, that is, the output display processing on the display unit, has been completed.
The latest drawn trajectory (immediately preceding trajectory) is the newest trajectory region within the drawn trajectory and is the trajectory region adjacent to the drawing delay area.
Specifically, the immediately preceding trajectory is a drawn region of a predetermined length including the latest drawing position 72 shown in FIG. 4. In step S101, a plurality (k) of trajectory regions (similar trajectories) similar to this latest drawn trajectory (immediately preceding trajectory) are searched for in the drawn trajectory. k is a predetermined number, for example 3, 5, 10, 20, or 30.
Note that if the predetermined number k of similar trajectories cannot be found, for example in an early stage of drawing, the processing may be performed using only the similar trajectories that were found, or the estimation processing may be suspended.
(Step S102)
Next, in step S102, the succeeding trajectories of the plurality (k) of trajectory regions (similar trajectories) found in step S101 are estimated or selected, and these succeeding trajectories are connected to the end of the latest drawing position.
The latest drawing position corresponds to the latest drawing position 72 shown in the example of FIG. 4.
(Step S103)
Next, in step S103, the average trajectory of the connected succeeding trajectories is calculated.
(Step S104)
Finally, in step S104, the average trajectory calculated in step S103 is drawn as the final determined predicted trajectory.
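Steps S101 to S104 can be summarized in the following sketch. This is only an illustrative outline under stated assumptions, not the implementation of the disclosure: the function name, the simple translation of each succeeding trajectory to the latest drawing position (rather than matching the connection angle as described for FIG. 6 below), and the abstract `distance` callable are all simplifications.

```python
# Illustrative outline of steps S101-S104: search the drawn trajectory for k
# segments similar to the immediately preceding trajectory, take the segment that
# followed each of them, and average those successors point by point.

def predict_trajectory(history, seg_len, horizon, k, distance):
    """history: drawn trajectory as a list of (x, y) points, oldest first.
    distance: callable comparing two equal-length segments (smaller = more similar)."""
    latest = history[-seg_len:]                    # immediately preceding trajectory (query)
    candidates = []
    for end in range(seg_len, len(history) - horizon):
        seg = history[end - seg_len:end]           # a past candidate segment
        candidates.append((distance(latest, seg), end))
    candidates.sort()                              # S101: k most similar past segments
    auxiliaries = []
    for _, end in candidates[:k]:                  # S102: successor of each similar segment,
        successor = history[end:end + horizon]     # translated to start at the latest position
        x0, y0 = history[end - 1]
        xl, yl = history[-1]
        auxiliaries.append([(x - x0 + xl, y - y0 + yl) for x, y in successor])
    if not auxiliaries:
        return []
    # S103/S104: average the auxiliary predicted trajectories to obtain the final one.
    return [(sum(a[i][0] for a in auxiliaries) / len(auxiliaries),
             sum(a[i][1] for a in auxiliaries) / len(auxiliaries))
            for i in range(horizon)]
```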
A specific example of the processing according to the flow shown in FIG. 5 will be described with reference to FIG. 6. FIG. 6 shows a drawn line similar to that shown in FIG. 4: an input device such as a dedicated pen traces a trajectory from left to right, and a line along that trajectory is drawn.
The dedicated pen serving as the input device has already advanced beyond the latest drawing position 81 of the drawn trajectory, and a predicted trajectory following the estimated trajectory is determined and drawn ahead of the latest drawing position 81. In the example shown in FIG. 6, the finally set predicted trajectory is the final determined predicted trajectory (Qf) 86.
The final determined predicted trajectory (Qf) 86 is determined by processing such as averaging the k auxiliary predicted trajectories Q1 to Q3 that are set according to the k extracted similar trajectories.
The processing of steps S101 to S104 in FIG. 5 will now be described with reference to FIG. 6.
(Step S101)
First, in step S101, a plurality (k) of trajectory regions (similar trajectories) similar to the latest drawn trajectory (immediately preceding trajectory) are searched for in the drawn trajectory.
This processing will be described with reference to the example shown in FIG. 6.
First, the latest drawn trajectory (immediately preceding trajectory P) 82 is extracted from the drawn trajectory 80 shown in FIG. 6.
Next, a plurality (k) of trajectory regions similar to the extracted latest drawn trajectory (immediately preceding trajectory P) 82 are searched for in the drawn trajectory.
In the example shown in FIG. 6, k = 3, and the following three similar trajectories (Rn) of the immediately preceding trajectory (P) are found:
similar trajectory (R1) 83-1 of the immediately preceding trajectory (P),
similar trajectory (R2) 83-2 of the immediately preceding trajectory (P),
similar trajectory (R3) 83-3 of the immediately preceding trajectory (P).
In this way, the processing of step S101 searches the drawn trajectory for k similar trajectories that resemble the latest drawn trajectory (immediately preceding trajectory P) 82.
(Step S102)
Next, in step S102, the succeeding trajectories of the plurality (k) of trajectory regions (similar trajectories) found in step S101 are selected, and the selected succeeding trajectories are connected to the end of the latest drawing position.
This processing will be described with reference to the example shown in FIG. 6.
In step S101, the three similar trajectories shown in FIG. 6 were found:
similar trajectory (R1) 83-1 of the immediately preceding trajectory (P),
similar trajectory (R2) 83-2 of the immediately preceding trajectory (P),
similar trajectory (R3) 83-3 of the immediately preceding trajectory (P).
In step S102, the succeeding trajectories of these similar trajectories are selected.
In the example shown in FIG. 6, the following three succeeding trajectories are selected:
succeeding trajectory (A1) 84-1 of the similar trajectory (R1),
succeeding trajectory (A2) 84-2 of the similar trajectory (R2),
succeeding trajectory (A3) 84-3 of the similar trajectory (R3).
These three succeeding trajectories 84-1 to 84-3 are then connected to the end of the latest drawing position 81.
The angle between the immediately preceding trajectory P and each connected succeeding trajectory is set equal to the connection angle between that succeeding trajectory and the similar trajectory to which it corresponds.
In this way, as shown in FIG. 6, the three succeeding trajectories A1 to A3 are connected to the latest drawing position 81. As shown in FIG. 6, the connected trajectories are the following three auxiliary predicted trajectories:
auxiliary predicted trajectory (Q1) 85-1 corresponding to the succeeding trajectory (A1),
auxiliary predicted trajectory (Q2) 85-2 corresponding to the succeeding trajectory (A2),
auxiliary predicted trajectory (Q3) 85-3 corresponding to the succeeding trajectory (A3).
In this way, the processing of step S102 connects the succeeding trajectories of the similar trajectories detected in step S101 to the end of the latest drawing position.
(Step S103)
Next, in step S103, the average trajectory of the connected succeeding trajectories is calculated.
This processing will be described with reference to FIG. 6.
In FIG. 6, the succeeding trajectories connected to the end of the latest drawing position 81 are the following three auxiliary predicted trajectories:
auxiliary predicted trajectory (Q1) 85-1 corresponding to the succeeding trajectory (A1),
auxiliary predicted trajectory (Q2) 85-2 corresponding to the succeeding trajectory (A2),
auxiliary predicted trajectory (Q3) 85-3 corresponding to the succeeding trajectory (A3).
In step S103, the average trajectory of these three auxiliary predicted trajectories (Q1 to Q3) 85-1 to 85-3 is calculated.
The resulting average trajectory is the trajectory shown as the final determined predicted trajectory (Qf) 86 in FIG. 6.
(Step S104)
Finally, in step S104, the average trajectory calculated in step S103 is drawn as the final determined predicted line (final determined predicted trajectory).
This processing will be described with reference to FIG. 6.
In step S104, the average trajectory calculated in step S103, that is, the final determined predicted trajectory (Qf) 86 shown in FIG. 6, is drawn as the predicted trajectory.
[3. Example of prediction processing using coordinate information of the drawn trajectory]
The processing according to the flow shown in FIG. 5 can be executed using the coordinate information (x_t, y_t) of the drawing trajectory that is updated for each image frame (t) displayed on the display unit of the information processing apparatus.
The determination of the predicted trajectory using this coordinate information is described below.
For example, the information processing apparatus stores in memory the coordinate information (x_t, y_t) of the drawing trajectory newly displayed for each image frame (t) shown on the display unit. It stores the coordinate information corresponding to the drawing trajectory over a fixed period back from the latest drawing position, and uses this information for the trajectory similarity determination, the connection of succeeding trajectories, and the determination of the final predicted trajectory by averaging the connected succeeding trajectories.
The coordinate information (x, y) is stored in memory in association with each display frame (t). For example, the coordinates indicating the latest drawing trajectory position corresponding to frame t are stored as (x_t, y_t), and the coordinates indicating the latest drawing trajectory position corresponding to the next frame t+1 are stored as (x_{t+1}, y_{t+1}). In this way, coordinate information representing the trajectory position associated with each frame is stored in memory, and this information is used for the trajectory similarity determination and related processing.
A specific processing example using this trajectory coordinate information is described below.
(3-1. Similarity determination processing)
In step S101, described with reference to the flow shown in FIG. 5, the drawn trajectory is searched for trajectories similar to the latest drawn trajectory (immediately preceding trajectory).
In this similarity determination, feature quantities of the trajectories are calculated from the trajectory coordinate information, and similarity is determined by comparing the feature quantities.
For example, the following feature quantities are calculated for the similarity determination:
(1) speed: Z1
(2) acceleration: Z2
(3) angle: Z3
(4) angle difference: Z4
Each of these feature quantities is calculated from the coordinate information corresponding to the drawing trajectory.
The feature quantities calculated from the coordinate information forming the latest drawn trajectory (immediately preceding trajectory) are then compared with the feature quantities calculated from the coordinate information forming the drawn trajectory, and the regions of the drawn trajectory with the most similar feature quantities are extracted as similar trajectories. Through this processing, for example, the similar trajectories (R1 to R3) 83-1 to 83-3 shown in FIG. 6 are extracted.
The formulas for calculating the feature quantities Z1 to Z4 are as follows.
(1) Speed: Z1(t) = sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}
(2) Acceleration: Z2(t) = sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2} / sqrt{(x_{t-1} - x_{t-2})^2 + (y_{t-1} - y_{t-2})^2}
(3) Angle: Z3(t) = atan{(x_t - x_{t-1}) / (y_t - y_{t-1})}
(4) Angle difference: Z4(t) = atan{(x_t - x_{t-1}) / (y_t - y_{t-1})} - atan{(x_{t-1} - x_{t-2}) / (y_{t-1} - y_{t-2})}
Here,
Z1(t) is the speed at frame t,
Z2(t) is the acceleration at frame t,
Z3(t) is the angle at frame t, and
Z4(t) is the angle difference at frame t.
Also, sqrt denotes the square root, and atan denotes the arc tangent.
In this embodiment, t is a parameter indicating a frame number, but processing in which t is set as time information instead of a frame number is also possible.
That is, in the following description, frame t and frame u can be replaced with time t and time u.
The feature quantities Z1 to Z4 will be described with reference to FIG. 7.
FIG. 7 shows part of a drawn trajectory.
The figure shows the coordinates (x_{t-2}, y_{t-2}) to (x_{t+1}, y_{t+1}) of the latest positions of the drawing trajectory when each of frames t-2 to t+1 is displayed on the display unit, that is, the following four points P1 to P4.
As shown at the lower left of FIG. 7, x corresponds to the horizontal direction of the figure and y to the vertical direction.
P1: latest position coordinates (x_{t-2}, y_{t-2}) of the drawing trajectory displayed in frame t-2
P2: latest position coordinates (x_{t-1}, y_{t-1}) of the drawing trajectory displayed in frame t-1
P3: latest position coordinates (x_t, y_t) of the drawing trajectory displayed in frame t
P4: latest position coordinates (x_{t+1}, y_{t+1}) of the drawing trajectory displayed in frame t+1
The feature quantities Z1(t) to Z4(t) described above are calculated as the feature quantities corresponding to the coordinates (x_t, y_t) of frame t.
The same feature quantities are calculated at the coordinate positions corresponding to each frame other than frame t.
(Speed Z1(t))
The speed Z1(t), one of the feature quantities corresponding to the latest coordinates (x_t, y_t) of frame t, is calculated by the following formula:
Speed: Z1(t) = sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}
This corresponds to the distance La between P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}) shown in FIG. 7.
That is, it is the distance the trajectory has advanced between frame t-1 and frame t, and corresponds to the moving speed of the trajectory over one frame interval.
(Acceleration Z2(t))
The acceleration Z2(t), one of the feature quantities corresponding to the latest coordinates (x_t, y_t) of frame t, is calculated by the following formula:
Acceleration: Z2(t) = sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2} / sqrt{(x_{t-1} - x_{t-2})^2 + (y_{t-1} - y_{t-2})^2}
This corresponds to the ratio La/Lb of the distance La between P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}) shown in FIG. 7 to the distance Lb between P2 (x_{t-1}, y_{t-1}) and P1 (x_{t-2}, y_{t-2}).
That is, it expresses how many times larger the distance La advanced between frame t-1 and frame t is than the distance Lb advanced between frame t-2 and frame t-1, and corresponds to the ratio of the trajectory speed over the current frame interval to that over the preceding frame interval.
In the example above, the feature quantity Z2(t) indicating acceleration is calculated as a speed ratio, that is, how many times larger the later speed is than the earlier speed, but it may instead be calculated as the difference between the later speed and the earlier speed.
In that case, the feature quantity Z2(t) is calculated by the following formula:
Acceleration: Z2(t) = sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2} - sqrt{(x_{t-1} - x_{t-2})^2 + (y_{t-1} - y_{t-2})^2}
This corresponds to the difference La - Lb between the distance La between P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}) shown in FIG. 7 and the distance Lb between P2 (x_{t-1}, y_{t-1}) and P1 (x_{t-2}, y_{t-2}).
(Angle Z3(t))
The angle Z3(t), one of the feature quantities corresponding to the latest coordinates (x_t, y_t) of frame t, is calculated by the following formula:
Angle: Z3(t) = atan{(x_t - x_{t-1}) / (y_t - y_{t-1})}
In the triangle formed by the line segment La connecting P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}) shown in FIG. 7, the horizontal line Wa, and the vertical line Ha, this expands as
Z3(t) = atan{(x_t - x_{t-1}) / (y_t - y_{t-1})} = atan(Wa/Ha).
Therefore, Z3(t) corresponds to the angle α at the vertex P3 (x_t, y_t) of the triangle formed by the line segment La connecting P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}), the horizontal line Wa, and the vertical line Ha.
(Angle difference Z4(t))
The angle difference Z4(t), one of the feature quantities corresponding to the latest coordinates (x_t, y_t) of frame t, is calculated by the following formula:
Angle difference: Z4(t) = atan{(x_t - x_{t-1}) / (y_t - y_{t-1})} - atan{(x_{t-1} - x_{t-2}) / (y_{t-1} - y_{t-2})}
This corresponds to the difference (α - β) between the angle α at the vertex P3 (x_t, y_t) of the triangle formed by the line segment La connecting P3 (x_t, y_t) and P2 (x_{t-1}, y_{t-1}), the horizontal line Wa, and the vertical line Ha, and the angle β at the vertex P2 (x_{t-1}, y_{t-1}) of the triangle formed by the line segment Lb connecting P2 (x_{t-1}, y_{t-1}) and P1 (x_{t-2}, y_{t-2}), the horizontal line Wb, and the vertical line Hb, both shown in FIG. 7.
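The four per-frame feature quantities defined above can be computed from three consecutive coordinate points, roughly as follows. This is a sketch that follows the formulas above; the list `pts` (one (x, y) per frame) and the function name are assumptions, and the zero-denominator case of the atan terms is not handled here, which a practical implementation would need to address.

```python
import math

# Sketch of the feature quantities Z1-Z4 for frame t, following the formulas above.
# pts[t] is assumed to hold the latest trajectory coordinate (x, y) of frame t.

def features(pts, t):
    (x2, y2), (x1, y1), (x0, y0) = pts[t], pts[t - 1], pts[t - 2]
    la = math.hypot(x2 - x1, y2 - y1)           # distance P2-P3
    lb = math.hypot(x1 - x0, y1 - y0)           # distance P1-P2
    z1 = la                                     # speed
    z2 = la / lb if lb else 0.0                 # acceleration (speed-ratio form)
    z3 = math.atan((x2 - x1) / (y2 - y1))       # angle
    z4 = z3 - math.atan((x1 - x0) / (y1 - y0))  # angle difference
    return z1, z2, z3, z4
```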
These four feature quantities Z1 to Z4 are obtained sequentially for the latest coordinate point of each frame and stored in memory. If memory capacity is limited, the memory may be configured to hold the coordinate information for a predetermined number of frames, for example the last 100 frames counted back from the latest drawing position 81 shown in FIG. 6, and the memory data may be updated by deleting the coordinate information of the oldest frame each time the coordinate information of a newer frame is input.
If the memory capacity is sufficient, the trajectory data corresponding to the handwriting, including past data, may be stored in a non-volatile memory so that it remains even when the information processing apparatus is powered off; when drawing is started after the apparatus is powered on again, the predicted trajectory can then be estimated by performing the similarity determination and other processing using that accumulated past data.
The data accumulated in memory may also be associated with a user identifier (user ID), and processing that uses the accumulated data corresponding to the current user may be performed.
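A per-frame history of the kind described here, limited to a fixed number of recent frames and optionally keyed by user ID, could be kept as in the following sketch. The 100-frame limit is the example given above; the structure and names are assumptions for illustration.

```python
from collections import deque, defaultdict

# Sketch: per-user fixed-size buffer of per-frame trajectory samples.
# Appending beyond maxlen automatically discards the oldest frame.
MAX_FRAMES = 100
history_by_user = defaultdict(lambda: deque(maxlen=MAX_FRAMES))

def store_sample(user_id, frame, x, y, z1, z2, z3, z4):
    history_by_user[user_id].append(
        {"frame": frame, "x": x, "y": y, "features": (z1, z2, z3, z4)}
    )
```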
The search for similar trajectories is performed by calculating the above feature quantities from the coordinate information for the multiple frames stored in memory and comparing the feature quantities.
The quantities compared are the feature quantities of the immediately preceding trajectory and the feature quantities obtained from the other past trajectories.
Through this comparison, a predetermined number k of past trajectory regions with high similarity are selected, for example three in the example of FIG. 6.
The similarity determination formula is as follows:
D(t, t') = Σ_i Σ_{j=1..4} w_j (Z_j(t-i) - Z_j(t'-i))^2
The smaller the feature distance D(t, t') calculated by this formula, the higher the similarity is judged to be.
In the above similarity determination formula,
t' corresponds to the frame number in which the trajectory at the coordinates (x_{t'}, y_{t'}) of the latest drawing position 81 shown in FIG. 6 was displayed as the newly updated trajectory, that is, the frame in which the latest trajectory immediately before the predicted trajectory was drawn;
t corresponds to an arbitrary past frame in which the coordinates (x_t, y_t) of an arbitrary position on the drawn trajectory were displayed.
i runs over the frame interval used for the similarity comparison, which corresponds, for example, to the number of frames needed to draw the immediately preceding trajectory 82 shown in FIG. 6.
For example, if the immediately preceding trajectory 82 shown in FIG. 6 is a trajectory generated over five display frames, the comparison is performed using the coordinate position information of five frames.
That is, as shown in FIG. 8, if the immediately preceding trajectory 82 is a trajectory generated over five display frames, the candidates compared against the immediately preceding trajectory for similarity are all five-frame trajectory segments selected from the drawn trajectory 80 preceding the immediately preceding trajectory 82.
j is an index corresponding to the type of feature quantity, that is, the indices 1 to 4 of Z1 to Z4.
w_j is a weight set for each feature quantity Z_j = Z1 to Z4. Various values can be set for these weights depending on the situation; for example, all weights may be set uniformly to w_j = 1.
As an example of weight settings when the display device is full HD, the following may be used:
weight w1 = 1 for the speed Z1,
weight w2 = 100 for the acceleration Z2,
weight w3 = 10 for the angle Z3,
weight w4 = 1000 for the angle difference Z4.
The feature distance D(t, t') is calculated, and a predetermined number k of regions, for example k = 3 in the example of FIG. 6, are selected in ascending order of feature distance. The selected k regions are taken as the similar trajectories. In this way, for example, the similar trajectories R1 to R3 shown in FIG. 6 are selected.
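The feature distance D(t, t') and the k-nearest selection can be sketched as follows. This is an illustration assuming the definitions above, with `feat[f]` holding the tuple (Z1(f), ..., Z4(f)); the default weights are taken from the full-HD example above, and the optional decay factor epsilon anticipates the recency-weighted variant described immediately below (epsilon = 1.0 reproduces the plain distance).

```python
# Sketch of the feature distance D(t, t') and the selection of the k most similar frames.
# feat[f] is assumed to hold the tuple (Z1(f), Z2(f), Z3(f), Z4(f)) for frame f.

def feature_distance(feat, t, t_prime, window, weights=(1, 100, 10, 1000), epsilon=1.0):
    d = 0.0
    for i in range(window):                 # i runs over the compared frame interval
        decay = epsilon ** i                # epsilon < 1.0 emphasizes newer samples
        for j in range(4):                  # j runs over the feature types Z1..Z4
            diff = feat[t - i][j] - feat[t_prime - i][j]
            d += decay * weights[j] * diff * diff
    return d

def k_most_similar(feat, t_prime, window, k):
    """Return the k past frame indices t with the smallest D(t, t')."""
    candidates = range(window, t_prime - window)   # exclude the query segment itself
    return sorted(candidates,
                  key=lambda t: feature_distance(feat, t, t_prime, window))[:k]
```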
The weights w_j set for the feature quantities may also be configured differently, for example so that newer data are weighted more heavily; that is, the feature distance may be calculated with weights set according to distance so that trajectories close to the immediately preceding trajectory are preferentially selected.
When the similarity is calculated with larger weights on newer data, the formula for the feature distance D(t, t') is set as follows:
D(t, t') = Σ_i pow(ε, i) Σ_{j=1..4} w_j (Z_j(t-i) - Z_j(t'-i))^2
where
ε is a preset weight decay rate (0.0 < ε ≤ 1.0), and
pow(ε, i) is a weight (a function that equals 1.0 at the newest trajectory position and outputs smaller values for older positions).
In this way, the feature distance may be calculated with larger weights assigned to newer parts of the trajectory.
(3-2. Predicted trajectory determination processing)
Next, the processing that determines the predicted trajectory using the plurality of similar trajectories selected by the above processing will be described.
In steps S102 to S104 described with reference to the flow shown in FIG. 5, the plurality of similar trajectories selected in step S101 are connected to the tip of the latest drawing position and averaged to determine the final predicted trajectory.
This predicted trajectory determination processing is described below.
The coordinates (x_u, y_u) forming the predicted trajectory are calculated according to the following formulas (predicted trajectory coordinate calculation formulas):
x_u = (1/k) Σ_{n=1..k} x_u(n)'
y_u = (1/k) Σ_{n=1..k} y_u(n)'
In the above formulas, u corresponds to a frame number. For example, if the frame number of the display frame of the latest drawing position 81 shown in FIG. 6 is t, then u is a frame number later than t, that is, t < u.
Frame u corresponds to a future frame for which the drawing of the trajectory has not yet been completed.
The other parameters are defined as follows:
k is the number of extracted similar trajectories,
n is a variable running from 1 to k,
x_u(n)' is the x coordinate in frame u of the predicted line corresponding to the n-th (n = 1 to k) similar trajectory, and
y_u(n)' is the y coordinate in frame u of the predicted line corresponding to the n-th (n = 1 to k) similar trajectory.
The calculation of the x coordinate x_u(n)' and the y coordinate y_u(n)' in frame u of the predicted line corresponding to each of the n = 1 to k similar trajectories used in the above predicted trajectory coordinate calculation formulas is described next.
First, the speed v_t(n), angle a_t(n), and coordinates (x_t(n)', y_t(n)') at the tip position (latest position) of each of the k similar trajectories extracted in the processing described above (step S101 of the flow shown in FIG. 5) are calculated as follows:
(1) latest speed: v_t(n) = sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}
(2) latest angle: a_t(n) = atan{(x_t - x_{t-1}) / (y_t - y_{t-1})}
(3) latest coordinates: x_t(n)' = x_t, y_t(n)' = y_t
Here, n is a variable corresponding to the number (k) of extracted similar trajectories, n = 1 to k.
Next, the speed v_u(n) and angle a_u(n) in a future frame (frame u), predicted from the n-th (n = 1 to k) similar trajectory, are calculated according to the following formulas:
v_u(n) = v_{u-1}(n) * Z2(s_n + u - t)
a_u(n) = a_{u-1}(n) + Z4(s_n + u - t)
where s_n is the frame number of the n-th (n = 1 to k) of the k similar trajectories.
Further, applying the above formulas, the xy coordinates (x_u(n)', y_u(n)') in the future frame (frame u) predicted from the n-th (n = 1 to k) similar trajectory are calculated according to the following formulas:
x_u(n)' = x_{u-1}(n)' + v_u(n) * cos(a_u(n))
y_u(n)' = y_{u-1}(n)' + v_u(n) * sin(a_u(n))
According to the above formulas, the xy coordinates in the future frame u corresponding to each of the k similar trajectories are calculated. These coordinates are the xy coordinates forming the k auxiliary predicted trajectories shown in FIG. 6.
The xy coordinates (x_u(n)', y_u(n)') in frame u corresponding to the k similar trajectories, calculated according to the above formulas, are then substituted into the predicted trajectory coordinate calculation formulas described earlier to obtain the coordinates forming the final predicted trajectory.
That is, as described above, the coordinates (x_u, y_u) forming the predicted trajectory are calculated according to the following formulas (predicted trajectory coordinate calculation formulas):
x_u = (1/k) Σ_{n=1..k} x_u(n)'
y_u = (1/k) Σ_{n=1..k} y_u(n)'
The trajectory calculated according to these formulas is the final determined predicted trajectory (Qf) 86 shown in FIG. 6.
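Putting the recursions together, each of the k similar trajectories can be rolled forward from the latest drawing position and the results averaged per future frame, roughly as in the following sketch. This assumes the formulas above, reads the speed update multiplicatively from Z2 and the angle update additively from Z4, uses assumed names throughout, and assumes that `similar_frames` is non-empty and that each similar trajectory has at least `horizon` drawn frames following it.

```python
import math

# Sketch: roll each similar trajectory forward from the latest drawing position
# (frame t) and average the k auxiliary predicted trajectories for each future frame.
# feat[f] = (Z1(f), Z2(f), Z3(f), Z4(f)); similar_frames holds the k frame numbers s_n.

def predict_points(feat, latest_xy, v_t, a_t, similar_frames, horizon):
    per_frame = [[] for _ in range(horizon)]   # auxiliary points, grouped by future frame
    for s_n in similar_frames:
        x, y = latest_xy
        v, a = v_t, a_t
        for step in range(1, horizon + 1):     # u = t+1 .. t+horizon
            z = feat[s_n + step]               # Z(s_n + u - t)
            v = v * z[1]                       # v_u(n) = v_{u-1}(n) * Z2(s_n + u - t)
            a = a + z[3]                       # a_u(n) = a_{u-1}(n) + Z4(s_n + u - t)
            x = x + v * math.cos(a)            # x_u(n)' = x_{u-1}(n)' + v_u(n) cos(a_u(n))
            y = y + v * math.sin(a)            # y_u(n)' = y_{u-1}(n)' + v_u(n) sin(a_u(n))
            per_frame[step - 1].append((x, y))
    # Simple average over the k auxiliary trajectories gives the final predicted trajectory.
    return [(sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
            for pts in per_frame]
```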
In the above trajectory calculation formulas, the final predicted trajectory is obtained by simply averaging the coordinates of the predicted trajectories derived from all similar trajectories. Alternatively, the coordinates of the final predicted trajectory may be calculated as a weighted average in which more similar trajectories are given larger weights.
This processing is described below.
The parameters are set as follows:
s_n: the time (or frame) of each of the k similar points,
r_n: the similarity rank of each of the k similar points (most similar point = 1, least similar point = k),
q_n = pow(β, r_n - 1): the weight of the n-th similar point (equal to 1.0 for the most similar point (r_n = 1) and decreasing as r_n increases),
β: a preset weight decay rate (0.0 < β ≤ 1.0).
Using these parameters, the coordinates (x_u, y_u) forming the predicted trajectory are calculated according to the following formulas (predicted trajectory coordinate calculation formulas):
x_u = (Σ_{n=1..k} q_n x_u(n)') / (Σ_{n=1..k} q_n)
y_u = (Σ_{n=1..k} q_n y_u(n)') / (Σ_{n=1..k} q_n)
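For one future frame u, the rank-weighted average above can be computed as in the following small sketch; the ordering of `aux_points` by similarity rank and the value of β are assumptions for illustration.

```python
# Sketch: similarity-weighted average of the k auxiliary points for one future frame u.
# aux_points are assumed to be listed in similarity order (rank r_n = 1 first).

def weighted_average(aux_points, beta=0.8):
    weights = [beta ** rank for rank in range(len(aux_points))]  # q_n = beta ** (r_n - 1)
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, aux_points)) / total
    y = sum(w * p[1] for w, p in zip(weights, aux_points)) / total
    return x, y
```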
In this way, the coordinates of the predicted trajectory may be calculated with weighting according to similarity.
[4. Example of processing according to the reliability of the predicted trajectory]
When prediction based on so-called machine learning, such as the kNN method described above, is executed, the prediction accuracy drops when a relatively infrequent trajectory is drawn.
Infrequent trajectories include, for example, sudden changes of direction (common in kanji characters and the like), stops, and continuous curves.
That is, while the learning process improves prediction accuracy for frequently occurring trajectories, prediction accuracy drops for infrequent ones. Put bluntly, accuracy falls for anything other than straight lines and relatively smooth curves. This is especially noticeable when learning has not progressed sufficiently. When prediction accuracy drops, the probability that the predicted trajectory is drawn at a position different from that of the input device, such as a dedicated pen, increases.
Drawing such an incorrect predicted trajectory can be more of a nuisance than a help to the user.
Therefore, when the prediction accuracy is judged to be low, it is preferable not to reflect the prediction result in the drawing.
An example of this processing is described below.
First, the reliability of the predicted trajectory will be described with reference to FIG. 9.
FIG. 9 shows a drawn trajectory 90 and the immediately preceding trajectory 91 at its tip. The predicted trajectory is set ahead of the immediately preceding trajectory 91.
Based on the kNN method described above, k auxiliary predicted trajectories 92 are calculated at any given time. Because these k predicted trajectories each represent a similar past trajectory, their positions converge for simple trajectories such as straight lines, and conversely scatter when there is a sudden change in the trajectory.
When the auxiliary predicted trajectories 92 scatter, it means that even the best-matching past trajectories are poor matches for the current trajectory; in other words, even if the average of the k trajectories is taken, the probability that it is close to the correct answer is low. Therefore, the standard deviation σ of the coordinate positions of the k auxiliary predicted trajectories is calculated, and processing is performed according to the magnitude of the calculated standard deviation σ.
Specifically, only when the calculated standard deviation is small are the k auxiliary predicted trajectories averaged to determine and draw the coordinate positions of the final determined predicted trajectory. This reduces the prediction errors perceived by the user.
The circles shown in FIG. 9 (standard deviations 95U1 to 95U3 of the auxiliary predicted trajectories) conceptually represent the magnitude of the standard deviation of the coordinate points of the plurality (k) of auxiliary predicted trajectories in the future frames U = t+1, t+2, t+3 following the display frame t of the immediately preceding trajectory 91.
The larger the circle, the larger the standard deviation, that is, the greater the scatter of the auxiliary predicted trajectories, indicating that the final determined predicted trajectory to be calculated as their average is more uncertain.
The final position of the immediately preceding trajectory 91 corresponds to the coordinate position of the trajectory displayed in frame t, and the small circle 95U1 just beyond it represents the standard deviation of the predicted coordinates calculated from the auxiliary predicted trajectories for future frame U = t+1.
The next circle 95U2 represents the standard deviation of the predicted coordinates calculated from the auxiliary predicted trajectories for future frame U = t+2.
The next circle 95U3 represents the standard deviation of the predicted coordinates calculated from the auxiliary predicted trajectories for future frame U = t+3.
Only when the standard deviation is small, that is, when the predictions show little scatter, are the k auxiliary predicted trajectories averaged to determine and draw the coordinate positions of the final determined predicted trajectory. This reduces the prediction errors perceived by the user.
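The gate described here needs only the per-frame scatter of the k auxiliary points; a minimal sketch follows, in which the threshold value and all names are assumptions for illustration and each per-frame list is assumed to be non-empty.

```python
import math

# Sketch: draw the averaged prediction for a future frame only when the k auxiliary
# predicted points for that frame agree (small standard deviation).

def point_std(points):
    """Standard deviation of the k 2D points around their mean position."""
    mx = sum(p[0] for p in points) / len(points)
    my = sum(p[1] for p in points) / len(points)
    return math.sqrt(sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in points) / len(points))

def gated_prediction(aux_points_per_frame, sigma_threshold=5.0):
    drawn = []
    for points in aux_points_per_frame:          # one list of k points per future frame
        if point_std(points) > sigma_threshold:  # too much scatter: stop drawing further ahead
            break
        drawn.append((sum(p[0] for p in points) / len(points),
                      sum(p[1] for p in points) / len(points)))
    return drawn
```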
 この標準偏差算出処理と処理結果に応じた処理変更を伴う予測軌跡の算出描画処理のシーケンスについて、図10に示すフローチャートを参照して説明する。 The sequence of this standard deviation calculation process and the calculation / drawing process of the predicted trajectory accompanied by the process change according to the process result will be described with reference to the flowchart shown in FIG.
 図10は、先に説明した図5のフローと同様、kNN法を適用した予測軌跡の推定と描画処理シーケンスについて説明するフローチャートを示す図である。
 なお、このフローチャートに示す処理は、本開示の情報処理装置のデータ処理部、具体的には、例えばプログラム実行機能を有するCPU等からなるデータ処理部の制御の下で実行される。プログラムは、例えば情報処理装置のメモリに格納される。
FIG. 10 is a diagram illustrating a flowchart for explaining a prediction trajectory estimation and drawing process sequence to which the kNN method is applied, similarly to the flow of FIG. 5 described above.
Note that the processing shown in this flowchart is executed under the control of a data processing unit of the information processing apparatus according to the present disclosure, specifically, a data processing unit including, for example, a CPU having a program execution function. The program is stored, for example, in the memory of the information processing apparatus.
(Step S201)
Steps S201 to S202 are the same as steps S101 to S102 of the flow shown in FIG. 5 described above.
First, in step S201, a plurality (k) of trajectory regions (similar trajectories) similar to the latest drawn trajectory (immediately preceding trajectory) are searched for among the drawn trajectories.
A drawn trajectory is a trajectory region for which the trajectory analysis of the input device has been completed and for which the line drawing process corresponding to the trajectory, that is, the output display process on the display unit, has been completed.
The latest drawn trajectory (immediately preceding trajectory) is the newest trajectory region among the drawn trajectories and is the trajectory region adjacent to the drawing delay portion.
Specifically, the immediately preceding trajectory is, for example, a drawn region of a predetermined length including the latest drawing position 72 shown in FIG. 4. In step S201, a plurality (k) of trajectory regions (similar trajectories) similar to this latest drawn trajectory (immediately preceding trajectory) are searched for among the drawn trajectories. k is a predetermined number such as 3, 5, 10, 20, or 30.
(Step S202)
Next, in step S202, the subsequent trajectories of the plurality (k) of trajectory regions (similar trajectories) found in step S201 are estimated or selected, and these subsequent trajectories are connected ahead of the latest drawing position.
The latest drawing position corresponds to the latest drawing position 72 shown in the example of FIG. 4.
(Step S203)
From step S203 onward, the processing takes the standard deviation of the predicted trajectories into account.
For the plurality of predicted trajectories corresponding to the plurality of similar trajectories, that is, the k auxiliary predicted trajectories shown in FIG. 9, the standard deviation is calculated for each future frame, and the processing is varied frame by frame according to the calculation result.
First, in step S203, the first future frame U is set to U = t+1.
Frame t is the frame number of the frame in which the latest drawing position at the tip of the immediately preceding trajectory is displayed. U = t+1 corresponds to the frame following the display frame t of the latest drawing position.
(Step S204)
In step S204, the standard deviation σu of the coordinate positions of the k auxiliary predicted trajectories in frame U is calculated.
The coordinate positions corresponding to the k auxiliary predicted trajectories are obtained by the same processing as described above with reference to the flow of FIG. 5.
Step S204 then calculates the standard deviation σu of the k coordinate positions corresponding to frame U.
(Step S205)
Next, in step S205, it is determined, based on the standard deviation σu for frame U, whether drawing the predicted trajectory is appropriate, that is, whether a reliable predicted trajectory can be drawn.
This reliability determination is performed by applying the following judgment formula:
σu < αVt
In the above formula, αVt corresponds to the drawing determination threshold, where
α: a preset coefficient,
Vt: the drawing speed of the immediately preceding trajectory.
Vt corresponds to the length s of the immediately preceding trajectory in FIG. 9; s corresponds to the distance advanced by the trajectory during the single frame between frame t-1 and frame t.
That is, Vt is the length of the trajectory advanced during the immediately preceding frame and corresponds to the speed of the immediately preceding trajectory 91.
The coefficient α is a parameter that can be changed according to the situation: for example, its value is made smaller when drawing is to be permitted only at high reliability and larger when drawing is to be permitted even at low reliability. It may also be a user-configurable value.
If the above judgment formula is satisfied, that is, if the standard deviation σu is smaller than the threshold, it is determined that a relatively reliable predicted trajectory can be determined, and the process proceeds to step S206.
On the other hand, if the judgment formula is not satisfied, that is, if the standard deviation σu is equal to or greater than the threshold, it is determined that drawing a highly reliable predicted trajectory is difficult, and the process proceeds to step S211.
(Step S206)
In step S206, the average of the k coordinates that the plurality (k) of auxiliary predicted trajectories give for frame U is calculated.
(Step S207)
In step S207, the average coordinates calculated in step S206 are determined as the coordinates of the final predicted trajectory for frame U, and the drawing process of the predicted trajectory is executed.
(Step S208)
In step S208, it is determined whether there is an unprocessed frame beyond frame U for which the drawing process should be executed.
If there is, the process proceeds to step S209; if not, the process ends.
(Step S209)
In step S209, the frame number U is updated, that is,
U = U + 1
With the frame number updated in this way, processing of the next frame starts from step S204.
When all unprocessed frames have been processed, the process ends.
(Step S211)
Step S211 is executed when the determination in step S205 is No,
that is, when it is determined in step S205 that the standard deviation σu is equal to or greater than the threshold and that drawing a highly reliable predicted trajectory is difficult.
In this case, in step S211, a decision is made to end the prediction process. Alternatively, a prediction-method change process is performed to switch to conventional static prediction.
By executing the processing according to the flow shown in FIG. 10 in this way, the predicted trajectory is drawn only when the reliability is high, and the display of erroneous predicted trajectories caused by drawing low-reliability predictions can be suppressed.
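As a rough illustration of this flow, the following Python sketch gates the kNN-based prediction on the per-frame standard deviation. The function names, the point-list representation of trajectories, and the simplified similarity search are assumptions introduced for illustration and do not reflect the disclosed implementation itself.

```python
import math

def predict_gated(drawn, k, num_future, alpha, v_t, window=8):
    """Sketch of FIG. 10: average k auxiliary predicted trajectories per
    future frame, but only while their spread (standard deviation) stays
    below the threshold alpha * v_t (step S205)."""
    recent = drawn[-window:]                             # immediately preceding trajectory
    similar = find_similar_regions(drawn, recent, k)     # S201: kNN search (stub below)
    # S202: the points that followed each similar region become auxiliary predictions
    aux = [drawn[end:end + num_future] for end in similar]

    final = []
    for u in range(num_future):                          # S203/S209: frame loop U = t+1, t+2, ...
        pts = [traj[u] for traj in aux if u < len(traj)]
        if not pts:
            break
        mx = sum(x for x, _ in pts) / len(pts)
        my = sum(y for _, y in pts) / len(pts)
        var = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in pts) / len(pts)
        sigma_u = math.sqrt(var)                         # S204: spread of the k predictions
        if sigma_u >= alpha * v_t:                       # S205: reliability check sigma_u < alpha*Vt
            break                                        # S211: stop (or fall back to static prediction)
        final.append((mx, my))                           # S206/S207: averaged coordinate is drawn
    return final

def find_similar_regions(drawn, recent, k):
    """Stub for the similar-trajectory search of S201. A real implementation
    would compare feature vectors (speed, acceleration, angle, angle
    difference) of sliding windows against `recent`; here raw coordinate
    distance is used purely for illustration."""
    n, w = len(drawn), len(recent)
    candidates = []
    for start in range(0, max(1, n - 2 * w)):
        window_pts = drawn[start:start + w]
        dist = sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                   for a, b in zip(window_pts, recent))
        candidates.append((dist, start + w))             # remember where the window ends
    candidates.sort()
    return [end for _, end in candidates[:k]]
```

Here v_t corresponds to the length s advanced during the last frame, so the gate matches the judgment formula σu < αVt of step S205.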
[5. Processing using pressure detection information of the input device]
For example, when a dedicated pen is used as the input device, it is possible to detect the pressure of the pen against the display unit, calculate a reliability according to that pressure, and control drawing of the predicted trajectory according to the calculated reliability.
The pressure value of the input device against the display unit is detected by, for example, the input device itself or a pressure detection sensor provided on the surface of the display unit; the detected value is input to the control unit and used for the reliability calculation.
When a dedicated pen is used as the input device, a characteristic of pen input is that the pressure value decreases gradually over the several frames before the pen leaves (is released from) the surface of the display unit.
This tendency of the pressure value to decrease gradually is particularly noticeable, for example, when drawing the trailing sweep ("harai") portion of a character.
In the embodiment described above, the following feature quantities were used for the similar-trajectory determination process:
(1) Speed: Z1
(2) Acceleration: Z2
(3) Angle: Z3
(4) Angle difference: Z4
In addition to these feature quantities, a pressure value Z5, corresponding to the writing pressure of the input device against the display unit, and a pressure change amount Z6 are added.
Specifically, with the pressure value at frame t (or time t) denoted p_t, the feature quantities Z5 and Z6 are defined as follows:
Pressure value at frame t: Z5(t) = p_t
Pressure change at frame t: Z6(t) = p_t − p_(t−1)
These feature quantities Z5 and Z6 are added when determining similar trajectories.
When predicting the writing pressure along the predicted trajectory after the similarity determination, the predicted writing pressure (pressure value) p_u at a future frame u (or future time) on the predicted trajectory can be calculated by the following expression:
Predicted writing pressure: p_u = (1/k) Σ_{n=1..k} p_(s_n + u − t)
where
u is the frame number (or time),
k is the number of extracted similar trajectories,
n is a variable running from 1 to k, and
s_n is the frame number (or time) of the n-th of the k similar trajectories (n = 1 to k).
By introducing a pressure value corresponding to writing pressure as a feature quantity in the learning process, the prediction accuracy is improved, for example, at the moment of pen release.
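A minimal sketch of these pressure-related quantities follows, assuming a simple per-frame record of pressure values; the dataclass layout and function names are illustrative only and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    z1: float  # speed
    z2: float  # acceleration
    z3: float  # angle
    z4: float  # angle difference
    z5: float  # pressure value p_t
    z6: float  # pressure change p_t - p_(t-1)

def predicted_pressure(pressures, similar_frames, u, t):
    """Averaged pressure prediction p_u = (1/k) * sum_n p_(s_n + u - t),
    where `similar_frames` holds the frame numbers s_n of the k extracted
    similar trajectories and `pressures` maps a frame number to the recorded
    pressure value at that frame."""
    k = len(similar_frames)
    return sum(pressures[s_n + u - t] for s_n in similar_frames) / k
```

In practice the index s_n + u − t would need to be guarded against running past the recorded history.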
[6. Modifications of the predicted-trajectory calculation and drawing process]
Next, several modifications that can be expected to further enhance the effect, using the embodiment described above as the basic process, are described below.
(6-1) Processing for changing the learning pattern according to the type of drawing object
For example, the learning pattern is changed according to the type of drawing object the user is entering. Specifically, the learning pattern is changed depending on whether the user's drawing object is a picture or text and, if it is text, on the type of characters, for example whether they are alphabetic or Japanese, and whether the Japanese characters are kanji or hiragana.
The prediction process exploits the fact that the shape characteristics differ for each type of drawing target, such as pictures or text. For example, compared with alphabetic characters, which on the whole make heavy use of long curves, kanji contain more straight lines, sharp changes of direction, and short strokes, so their characteristics differ greatly.
By changing the feature quantities used for extracting similar trajectories and determining the predicted trajectory according to the type of drawing object in this way, more accurate processing becomes possible.
(6-2) Processing for changing the drawing mode of the predicted trajectory according to the relative position of the line of sight and the pen tip
An example of processing is described in which, when the predicted trajectory is drawn on the display unit, its drawing mode is changed according to the relative position of the user's line of sight and the pen tip.
For example, when a line is drawn with a dedicated pen as the input device and a right-handed user holds the pen in the right hand, the space to the left of the right hand can be observed well while drawing a line from left to right. When drawing a line from right to left, however, the space to the right is hidden by the hand and cannot be seen.
Even if a predicted trajectory is drawn in such a hidden area, the user cannot observe it well. Therefore, the prediction process is not executed for such areas, or the number of predicted frames is reduced.
That is, when a line segment is being drawn from left to right, the number of predicted frames is increased, and when a line segment is being drawn from right to left, control is performed to stop the prediction or to reduce the number of predicted frames.
For a left-handed user, the processing is the reverse of that for a right-handed user.
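The following sketch, using assumed function and parameter names, illustrates one way this direction- and handedness-dependent adjustment of the number of predicted frames could look.

```python
def predicted_frame_count(dx, right_handed, reduced_frames=0, extended_frames=5):
    """Adjust how many future frames to predict from the horizontal drawing
    direction (dx > 0 means the pen is moving left to right) and the user's
    dominant hand. A return value of 0 means prediction is stopped."""
    moving_toward_visible_side = (dx > 0) if right_handed else (dx < 0)
    if moving_toward_visible_side:
        return extended_frames   # area ahead of the pen is visible: predict further
    return reduced_frames        # area hidden by the hand: stop or reduce prediction
```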
(6-3. Examples of processing combined with other prediction processes)
By using other prediction processes together with the trajectory prediction process described above, the prediction accuracy can be improved further.
((a) Combined use with word prediction)
For example, when the user's drawing object is a document, word prediction can be performed and its result used to improve the prediction accuracy.
For example, the next character to be written is estimated by word prediction, the trajectory corresponding to the estimated character is estimated, and the weights of auxiliary predicted trajectories similar to this estimation result are increased.
((b) Switching between dynamic and static prediction)
The processing described above, which applies learning by searching past trajectories for similar trajectories and determining a predicted trajectory from them, is dynamic prediction. However, conventional static prediction, which estimates the trajectory by, for example, linear approximation using only the data of the immediately preceding trajectory, can also be effective in some cases.
When learning in a given environment has not progressed sufficiently, for example just after the drawing application is launched or when the user changes, the accuracy of the dynamic prediction method drops. In such cases, prediction using a static method may be more effective. Later, when enough drawn trajectories have been obtained to search for similar trajectories, the process switches to dynamic prediction.
By switching between dynamic and static prediction according to the situation in this way, prediction optimal for the situation becomes possible.
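A minimal sketch of such switching is shown below, assuming that the amount of accumulated drawn trajectory is used as the switching criterion; the threshold value and function names are assumptions.

```python
def choose_prediction(drawn_points, dynamic_predict, min_history=200):
    """Use conventional static prediction until enough drawn trajectory has
    accumulated for a meaningful similar-trajectory search, then delegate to
    the dynamic, learning-based predictor supplied by the caller."""
    if len(drawn_points) < min_history:
        return static_predict(drawn_points)
    return dynamic_predict(drawn_points)

def static_predict(points, num_future=3):
    """Conventional static prediction: linear extrapolation from the last
    two drawn points."""
    if len(points) < 2:
        return []
    (x0, y0), (x1, y1) = points[-2], points[-1]
    dx, dy = x1 - x0, y1 - y0
    return [(x1 + dx * (i + 1), y1 + dy * (i + 1)) for i in range(num_future)]
```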
((c) Combined use with prediction based on a character database)
Devices capable of character input generally store a character database (font database) in memory, and many such devices have a function for searching the character database for characters similar to the user's drawn trajectory.
Using this function, a character similar to the one the user is writing is selected from the database, the trajectory corresponding to the selected character is estimated, and the weights of auxiliary predicted trajectories similar to this estimation result are increased.
((d) Example of prediction processing independent of character scale)
For example, when a user draws characters, their size (scale) varies from one occasion to another. In the embodiment described above, accurate similarity determination may become difficult when the scales differ.
To solve this problem, for example, the character size of a single character drawn by the user is determined, a process is executed to convert the determined character size to an absolute scale having a preset standard size, and comparison processing and the like are performed using trajectories on this absolute scale.
The scale conversion processing is executed by combining, for example, enlargement, reduction, interpolation, and thinning processing.
By performing such processing, erroneous determinations caused by differences in scale can be reduced.
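A simple sketch of such a normalization to an absolute scale follows; the standard size and the bounding-box approach are assumptions for illustration.

```python
def to_absolute_scale(points, standard_size=100.0):
    """Rescale a drawn character's trajectory so that its bounding box fits
    the preset standard size, making the subsequent similarity comparison
    independent of the size at which the character was drawn."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    extent = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0  # guard tiny strokes
    scale = standard_size / extent
    x0, y0 = min(xs), min(ys)
    return [((x - x0) * scale, (y - y0) * scale) for x, y in points]
```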
(6-4. Other modifications)
Various settings are possible for how far ahead the predicted trajectory is drawn. For example, the apparatus may perform the maximum processing allowed by its processing capability, may draw only up to a position a predetermined distance short of the actual pen position, or may allow the user to set any desired range.
The drawing range of the predicted trajectory may also be set according to the result of benchmarking the processing capability (performance) of the information processing apparatus.
The drawing range of the predicted trajectory may also be determined by identifying the device type of the information processing terminal, or according to the running application, for example whether it is an application for drawing characters or one for drawing pictures.
Furthermore, when similarity determination using past drawing trajectories is performed, past similar trajectories may be stored in memory in association with the user ID identifying the user who drew them, and when performing similarity determination or drawing a predicted trajectory, the past trajectory information of the same user may be selected and used.
In addition, whether the user is right-handed or left-handed may be determined, or entered as a setting, when the device is used, and the drawing settings of the predicted trajectory may be changed according to this dominant-hand information.
As a method for judging the delay time from the hardware configuration and determining the number of predicted frames, for example, the following processing can be applied.
First, information such as the product number of the information processing terminal, the ID of the touch panel used in the terminal, the ID of the touch panel driver, and the ID of the graphics chip is acquired.
Next, the delay time is estimated by checking this information against a database that stores correspondence data between these IDs and delay times, and the number of predicted frames is determined.
The database referred to here may be stored in the information processing terminal or placed on a server on the network. When a server is used, the acquired ID information is transmitted from the terminal to the server, and the estimated delay time or the number of predicted frames is received from the server.
Alternatively, the number of predicted frames may be determined by actually measuring the delay time, that is, by internally measuring the time from input detection by the input device to the completion of drawing and determining the number of predicted frames based on the measured time. Input detection and the detection of drawing completion may be estimated from the signals of the touch detection driver and the graphics driver, or by image recognition based on images captured by a camera.
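The following sketch illustrates the lookup-based variant; the lookup table, its keys, and the delay values are hypothetical placeholders, not data from the disclosure.

```python
import math

# Hypothetical table mapping (touch panel ID, graphics chip ID) to an
# end-to-end delay in milliseconds. All entries are placeholder values.
DELAY_DB_MS = {
    ("panel_a", "gpu_x"): 48.0,
    ("panel_b", "gpu_y"): 64.0,
}

def predicted_frames(panel_id, gpu_id, frame_ms=16.7, default_delay_ms=50.0):
    """Estimate the display delay from the device IDs and convert it into the
    number of future frames that the trajectory prediction should cover."""
    delay_ms = DELAY_DB_MS.get((panel_id, gpu_id), default_delay_ms)
    return max(1, math.ceil(delay_ms / frame_ms))
```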
Whether the trajectory prediction process is executed may also be made switchable according to the input device.
For example, trajectory prediction may be executed when the input device is a dedicated pen but not when the input device is a finger (touch input).
Other input devices, such as a mouse, also exist, and the configuration may allow trajectory prediction to be switched on or off according to the type of input device.
Whether the trajectory prediction process is executed may also be configurable by a user setting.
Furthermore, the difference between the predicted trajectory and the actual trajectory may be calculated successively, and when the difference becomes equal to or greater than a predetermined threshold, control may be performed to stop (turn off) the trajectory prediction.
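A small sketch of this divergence check follows; the threshold value and the point-pair representation are assumptions.

```python
def should_disable_prediction(predicted, actual, threshold=12.0):
    """Compare corresponding predicted and actual points and report that
    prediction should be turned off once the deviation reaches the threshold."""
    for (px, py), (ax, ay) in zip(predicted, actual):
        deviation = ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        if deviation >= threshold:
            return True
    return False
```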
When trajectory prediction has been executed and the drawing application is terminated, the learning result data applied to the trajectory prediction is saved in memory as log data associated with the user ID of the application's user. By doing so, when the application is launched again, entering the user ID makes the user-specific learning data stored in memory available, and trajectory prediction using that user's learning results can be executed immediately.
[7. Embodiments for controlling the drawing mode of the predicted trajectory]
Next, embodiments for controlling the drawing mode of the predicted trajectory are described.
As described above, in the processing of the present disclosure, a future trajectory is predicted by learning from the drawn trajectory (actual trajectory), and the predicted trajectory is drawn.
However, the predicted trajectory displayed on the display unit may deviate from the actual trajectory. If such a "predicted trajectory", which may deviate from the actual trajectory, is displayed in the same manner as the "drawn trajectory" corresponding to the actual trajectory, it may instead be a nuisance to the user.
The following embodiments therefore display such a potentially erroneous "predicted trajectory" in a mode different from that of the "drawn trajectory", which reflects the actual trajectory.
The display control method for the predicted trajectory is described below, divided into the following five processing examples.
(Processing example 1) Display the "drawn trajectory" corresponding to the actual trajectory as a solid line, and display the "predicted trajectory" with at least one of its color, transparency, and thickness changed so that it is less conspicuous than the solid line.
(Processing example 2) Update the predicted trajectory every frame; that is, erase the displayed predicted trajectory in the next drawing frame and draw a new predicted trajectory.
(Processing example 3) Change at least one of the length, color, transparency, and thickness according to the reliability index value of the predicted trajectory.
(Processing example 4) Turn drawing of the predicted trajectory on or off, that is, switch between display and non-display, according to the situation.
(Processing example 5) When the predicted trajectory deviates from the actual trajectory, change the display mode of the predicted trajectory to make it less conspicuous.
The process of determining the predicted trajectory is the same as described above. In the display control processing examples described below, processing is performed to control the display mode of the predicted trajectory determined in accordance with the predicted-trajectory estimation processing described above.
FIG. 11 is a diagram similar to FIG. 9 described above, showing an example of the plurality (k) of auxiliary predicted trajectories applied to the predicted-trajectory determination process and of the standard deviation of each coordinate position of the auxiliary predicted trajectories.
FIG. 11 shows a drawn trajectory 100 displayed according to the actual trajectory of a dedicated pen serving as the input device, auxiliary predicted trajectories 103 calculated by the embodiment described above, that is, for example by processing according to the flow shown in FIG. 5, and a predicted trajectory 105 calculated as the average trajectory of the auxiliary predicted trajectories 103.
The tip of the drawn trajectory 100 is the latest drawing position 102 at coordinates (x, y), displayed in frame t.
The drawing trajectory from frame t-1 to frame t is the immediately preceding trajectory of length s.
As described above, k auxiliary predicted trajectories 103 are calculated at any given time based on the kNN method. How much these k predicted trajectories scatter depends on the situation. A large variation among the plurality of auxiliary predicted trajectories 103 means that none of the past trajectories closely matches the current one, that is, the probability that the predicted trajectory 105 obtained by averaging the k trajectories will coincide with the actual trajectory is low.
As an index of the degree of this variation, the standard deviation σ of each coordinate position of the k auxiliary predicted trajectories is calculated. The circles shown in FIG. 11 (standard deviations 104-U1 to 104-U3 of the auxiliary predicted trajectories) conceptually indicate the magnitude of the standard deviation of the coordinate points of the plurality (k) of auxiliary predicted trajectories in the future frames U = t+1, t+2, and t+3 following the display frame t of the immediately preceding trajectory 101.
The larger the circle, the larger the standard deviation, that is, the greater the variation among the auxiliary predicted trajectories and the more uncertain the final predicted trajectory that will be calculated as their average.
The final position of the immediately preceding trajectory 101 corresponds to the coordinate position of the trajectory displayed in frame t. The small circle 104-U1 that follows indicates the standard deviation of the predicted coordinates calculated from the auxiliary predicted trajectories for the future frame U = t+1.
The next circle 104-U2 indicates the standard deviation of the predicted coordinates calculated from the auxiliary predicted trajectories for the future frame U = t+2.
The next circle 104-U3 indicates the standard deviation of the predicted coordinates calculated from the auxiliary predicted trajectories for the future frame U = t+3.
In this embodiment, this standard deviation is used as a reliability index value for the predicted trajectory, and display control corresponding to the reliability index value, for example, is executed.
FIG. 12 is a diagram showing an example of calculating the reliability for each future frame U = t+1, t+2, t+3.
As shown in FIG. 12, the predicted trajectory 105 calculated by the processing according to the flow of FIG. 5 described above is drawn ahead of the latest drawing position (x, y) 102 displayed in frame t. The latest drawing position 102 is the one displayed in frame t, and the drawing trajectory from frame t-1 to frame t is the immediately preceding trajectory of length s.
The predicted trajectory is set so as to connect the following points:
a predicted point 111 at the predicted coordinates (px1, py1) of the future frame u = t+1,
a predicted point 112 at the predicted coordinates (px2, py2) of the future frame u = t+2, and
a predicted point 113 at the predicted coordinates (px3, py3) of the future frame u = t+3.
The predicted trajectory 105 is set and displayed so as to connect these points.
This predicted trajectory 105 is, as described above with reference to FIGS. 5 and 6, the trajectory calculated as the average of the auxiliary predicted trajectories derived from the plurality of similar trajectories.
That is, as described above with reference to FIG. 6, a plurality (k) of similar trajectories resembling the region including the immediately preceding trajectory up to the latest drawing position 102 are extracted from the drawn trajectory 100, and the average of the k auxiliary predicted trajectories set on the basis of these similar trajectories is set as the predicted trajectory 105.
As described with reference to FIG. 11, the k extracted auxiliary predicted trajectories have a certain amount of variation. The degree of variation of these k auxiliary predicted trajectories is calculated as the standard deviation σ, which is used as the reliability index value.
For example, the reliability index value (standard deviation) of each point on the predicted trajectory 105 ahead of the latest drawing position (x, y) displayed in frame t is set as shown in FIG. 12.
The standard deviations calculated from the coordinate positions corresponding to the k auxiliary predicted trajectories, that is, the reliability index values, are as follows:
reliability index value of the predicted point 111 at the predicted coordinates (px1, py1) of the future frame u = t+1: SD[1] = 0.08,
reliability index value of the predicted point 112 at the predicted coordinates (px2, py2) of the future frame u = t+2: SD[2] = 3.75,
reliability index value of the predicted point 113 at the predicted coordinates (px3, py3) of the future frame u = t+3: SD[3] = 8.52.
In this way, a reliability index value is calculated for each frame.
As described above, the reliability index value corresponds to the standard deviation indicating the degree of variation of the auxiliary predicted trajectories; the smaller the value, the higher the reliability, and the larger the value, the lower the reliability.
The display mode of the predicted trajectory is controlled according to the reliability index value SD[u] of the predicted trajectory for each future frame u.
Although u is described here as a frame number, the processing can also be performed with u set as a time.
Specifically, the display control according to the reliability index value can be executed as any one, or a combination, of the five processing modes described above, that is, the following five processing examples.
(Processing example 1) Display the "drawn trajectory" corresponding to the actual trajectory as a solid line, and display the "predicted trajectory" with at least one of its color, transparency, and thickness changed so that it is less conspicuous than the solid line.
(Processing example 2) Update the predicted trajectory every frame; that is, erase the displayed predicted trajectory in the next drawing frame and draw a new predicted trajectory.
(Processing example 3) Change at least one of the length, color, transparency, and thickness according to the reliability index value of the predicted trajectory.
(Processing example 4) Turn drawing of the predicted trajectory on or off, that is, switch between display and non-display, according to the situation.
(Processing example 5) When the predicted trajectory deviates from the actual trajectory, change the display mode of the predicted trajectory to make it less conspicuous.
Specific aspects and effects of each processing example are described below.
(Processing example 1)
Processing example 1 displays the "drawn trajectory" corresponding to the actual trajectory as a solid line and displays the "predicted trajectory" with at least one of its color, transparency, and thickness changed so that it is less conspicuous than the solid line.
A specific display control example is described with reference to FIG. 13.
FIG. 13 shows example display modes for the following trajectories:
(1) drawn trajectory (corresponding to the actual trajectory)
(2) predicted trajectory
For these two trajectories, display control such as the following is performed.
(A) The display color is controlled: (1) the drawn trajectory (actual trajectory) is displayed as a black solid line, and (2) the predicted trajectory is displayed as a solid line in a color other than black (for example, red).
(B) The transparency of the displayed line is controlled: (1) the drawn trajectory (actual trajectory) is displayed as a non-transparent black solid line (transparency 0%), and (2) the predicted trajectory is displayed as a partly transparent black solid line (for example, transparency 50%).
(C) The thickness of the displayed line is controlled: (1) the drawn trajectory (actual trajectory) is displayed as a thick black solid line, and (2) the predicted trajectory is displayed as a thin black solid line.
For example, display control of any one of (A) to (C), or of a combination of them, is performed.
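A minimal sketch of such style selection follows; the style structure, the concrete color, opacity, and width values, and the function name are assumptions chosen only to illustrate making the predicted stroke less conspicuous.

```python
from dataclasses import dataclass

@dataclass
class StrokeStyle:
    color: str
    opacity: float   # 1.0 = fully opaque (transparency 0%)
    width: float

def stroke_style(is_predicted: bool) -> StrokeStyle:
    """Return a less conspicuous style for the predicted trajectory than for
    the drawn (actual) trajectory, combining the color, transparency, and
    thickness controls (A) to (C)."""
    if is_predicted:
        return StrokeStyle(color="red", opacity=0.5, width=1.0)
    return StrokeStyle(color="black", opacity=1.0, width=3.0)
```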
A specific display example is shown in FIG. 14. The example shown in FIG. 14 corresponds to the display control example of FIG. 13(C), that is, an example of controlling the thickness of the displayed line in which (1) the drawn trajectory (actual trajectory) is displayed as a thick black solid line and (2) the predicted trajectory is displayed as a thin black solid line.
By performing display control that displays (1) the drawn trajectory (actual trajectory) and (2) the predicted trajectory in different modes in this way, the user (the person drawing) can clearly distinguish the drawn trajectory corresponding to the actual trajectory from the predicted trajectory and can draw without being misled by the predicted trajectory.
Furthermore, the display control of this (processing example 1) can be extended so that the display mode of the predicted trajectory is set according to its reliability.
A specific display control example is described with reference to FIG. 15.
Like FIG. 13, FIG. 15 shows example display modes for the following trajectories:
(1) drawn trajectory (corresponding to the actual trajectory)
(2) predicted trajectory
For these two trajectories, display control such as the following is performed.
(A) The display color is controlled: (1) the drawn trajectory (actual trajectory) is displayed as a black solid line, and (2) the predicted trajectory is displayed as a solid line in a color other than black.
In addition, the color of the predicted trajectory is changed according to its reliability.
For example, a highly reliable predicted trajectory is shown in red and a less reliable one in yellow, the color changing with the reliability.
(B) The transparency of the displayed line is controlled: (1) the drawn trajectory (actual trajectory) is displayed as a non-transparent black solid line (transparency 0%), and (2) the predicted trajectory is displayed as a partly transparent black solid line.
In addition, the transparency of the predicted trajectory is changed according to its reliability.
For example, a highly reliable predicted trajectory is given low transparency and a less reliable one high transparency.
(C) The thickness of the displayed line is controlled: (1) the drawn trajectory (actual trajectory) is displayed as a thick black solid line, and (2) the predicted trajectory is displayed as a black solid line thinner than the drawn trajectory.
In addition, the thickness of the predicted trajectory is changed according to its reliability.
For example, a highly reliable predicted trajectory is shown as a medium-thick line and a less reliable one as a thin line, the thickness changing with the reliability.
For example, display control of any one of (A) to (C), or of a combination of them, is performed.
A specific display example is shown in FIG. 16. The example shown in FIG. 16 corresponds to the display control example of FIG. 15(C), that is, an example of controlling the thickness of the displayed line in which (1) the drawn trajectory (actual trajectory) is displayed as a thick black solid line and (2) the predicted trajectory is displayed as a black solid line whose highly reliable portions are shown as a medium-thick line and whose less reliable portions are shown as a thin line.
By displaying (1) the drawn trajectory (actual trajectory) and (2) the predicted trajectory in different modes in this way, and further varying the display of the predicted trajectory according to its reliability, the user (the person drawing) can clearly distinguish the drawn trajectory corresponding to the actual trajectory from the predicted trajectory and can also check the reliability of the predicted trajectory.
(Processing example 2)
Next, processing example 2 is described. In processing example 2, the predicted trajectory is updated every frame; that is, the displayed predicted trajectory is erased in the next drawing frame and a new predicted trajectory is drawn.
A specific display example is described with reference to FIG. 17.
FIG. 17 shows display examples for the following two consecutive frames:
(a) display example of frame n
(b) display example of frame n+1
In the display example of frame n shown in (a), the predicted trajectory 105 connecting predicted point 1 (111), predicted point 2 (112), and predicted point 3 (113) is displayed ahead of the latest drawing position 102 of the drawn trajectory 100 corresponding to the actual trajectory. The actual trajectory at this point in time is the actual trajectory 121 shown in the figure; this actual trajectory 121 is not displayed.
In the display example of the next frame, frame n+1, shown in (b), the latest drawing position is updated to become the updated latest drawing position 122. This position almost coincides with predicted point 1 (111) of frame n.
Furthermore, each predicted point of the predicted trajectory 105 is also updated; the predicted points are set as updated predicted points 1 to 3 (131 to 133) at positions ahead of those in frame n, and the predicted trajectory 105 is displayed so as to connect these updated predicted points.
In this way, in (processing example 2) the prediction line is erased and redrawn from new points every frame. As a result, the prediction line is updated with a length corresponding to the refresh rate of the screen, realizing a display in which the prediction line is replaced by a more accurate one every frame.
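A rough sketch of this per-frame update follows; the canvas interface (clear_layer, draw_polyline) and the separate prediction layer are assumed purely for illustration and are not part of the disclosure.

```python
def render_frame(canvas, drawn_points, predictor):
    """Per-frame update of processing example 2: clear the previous
    prediction overlay and redraw a freshly computed prediction, so the
    displayed prediction line is replaced every frame."""
    canvas.clear_layer("prediction")               # erase last frame's prediction line
    canvas.draw_polyline("ink", drawn_points)      # drawn trajectory (actual input)
    predicted_points = predictor(drawn_points)     # recompute from the newest input
    if predicted_points:
        canvas.draw_polyline("prediction",
                             [drawn_points[-1]] + predicted_points)
```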
(Processing example 3)
Next, processing example 3 is described. Processing example 3 changes at least one of the length, color, transparency, and thickness according to the reliability index value of the predicted trajectory.
This (processing example 3) is an extension of (processing example 1) and performs display control that changes the displayed length according to the reliability index value of the predicted trajectory.
The reliability index value is the value corresponding to the standard deviation of the auxiliary predicted trajectories described above with reference to FIGS. 11 and 12; the smaller the value, the higher the reliability of the predicted trajectory, and the larger the value, the lower its reliability.
When the reliability index value is large, there is a high possibility that the prediction deviates from the actual trajectory. In such a case, the predicted trajectory is not displayed.
Specifically, the reliability index value (standard deviation) of each predicted point described above with reference to FIGS. 11 and 12 is compared with a preset threshold (reliability threshold), and if the reliability index value does not fall below the threshold, the portion of the predicted trajectory immediately before that predicted point is not drawn.
This display control processing example is described with reference to FIG. 18.
In FIG. 18, assume that the reliability index values of the three predicted points making up the predicted trajectory 105 to be displayed have been calculated as follows:
(a) reliability of predicted point 1 (111): SD[1] = 0.08
(b) reliability of predicted point 2 (112): SD[2] = 3.75
(c) reliability of predicted point 3 (113): SD[3] = 8.52
As described above with reference to FIGS. 11 and 12, these reliability index values correspond to the standard deviation σ of the coordinate positions of the auxiliary predicted trajectories used to calculate each predicted point; the smaller the value, the higher the reliability.
Here, the threshold that determines whether the predicted trajectory is displayed is a value that varies with the speed of the immediately preceding trajectory within the drawn trajectory, that is, with the moving speed of the immediately preceding trajectory 161 shown in FIG. 18.
Specifically, the value obtained by multiplying a predetermined constant α by the moving speed Vt of the immediately preceding trajectory 161, that is,
α × Vt
is used as the threshold, where α is a preset coefficient.
For Vt, the moving distance s of the immediately preceding trajectory over one frame can be used. That is, defining the moving speed as the moving distance per frame and applying the inter-frame trajectory moving distance s shown in FIG. 18,
α × s
may be used as the threshold.
For example, the above threshold α × s is compared with the reliability index value (standard deviation) SD[u] of each predicted point on the predicted trajectory. Here u denotes a future frame number following the display frame t of the latest drawing position. In FIG. 18, with the display frame t of the latest drawing position set to t = 0, the reliability index values of the predicted points corresponding to the future frames 1, 2, and 3 are shown as SD[1], SD[2], and SD[3], respectively.
Whether the predicted trajectory is displayed up to each predicted point is determined using the comparison between the reliability index value of that predicted point and the threshold α × s, that is, the judgment formula
SD[u] < α × s
If this judgment formula is satisfied, that is, if the reliability index value SD[u] of a predicted point is smaller than the threshold α × s, the reliability of that predicted point is judged to be high, and the predicted trajectory up to that predicted point is drawn and displayed.
On the other hand, if the reliability index value SD[u] of a predicted point is not smaller than the threshold α × s, the reliability of that predicted point is judged to be low, and drawing and display of the predicted trajectory up to that predicted point are cancelled.
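A short sketch of this gating follows, reusing the example values given for FIG. 18 (α = 1.5, s = 4.0, and SD = 0.08, 3.75, 8.52); the list representation of the predicted points is an assumption.

```python
def visible_prediction_points(points, sd_values, alpha, s):
    """Return the prefix of predicted points whose reliability index value
    (standard deviation) satisfies SD[u] < alpha * s; drawing stops at the
    first point that fails the check."""
    threshold = alpha * s
    visible = []
    for point, sd in zip(points, sd_values):
        if sd >= threshold:
            break
        visible.append(point)
    return visible

# With the FIG. 18 example values the threshold is 1.5 * 4.0 = 6.0, so the
# trajectory is drawn up to predicted point 2 and the segment leading to
# predicted point 3 is hidden.
shown = visible_prediction_points([(1, 1), (2, 2), (3, 3)],
                                  [0.08, 3.75, 8.52], alpha=1.5, s=4.0)
assert len(shown) == 2
```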
For example, the drawing determination of the predicted trajectory using the reliability index values of the predicted points shown in FIG. 18 proceeds as follows.
As an example, let
coefficient α = 1.5
moving distance of the immediately preceding trajectory per frame s = 4.0
With these settings, the reliability index value SD[u] of each predicted point shown in FIG. 18 is substituted into the comparison judgment formula with the threshold,
SD[u] < α × s
to determine whether the formula is satisfied.
Prediction point 1 has a reliability index value SD[1] = 0.08. That is,
  SD[1] = 0.08,
  α × s = 1.5 × 4.0 = 6.0,
and therefore
  SD[1] = 0.08 < 6.0
holds, so the judgment formula
  SD[u] < α × s
is satisfied.
Prediction point 2 has a reliability index value SD[2] = 3.75. That is,
  SD[2] = 3.75,
  α × s = 1.5 × 4.0 = 6.0,
and therefore
  SD[2] = 3.75 < 6.0
holds, so the judgment formula
  SD[u] < α × s
is satisfied.
Prediction point 3 has a reliability index value SD[3] = 8.52. That is,
  SD[3] = 8.52,
  α × s = 1.5 × 4.0 = 6.0,
and
  SD[3] = 8.52 < 6.0
does not hold, so the judgment formula
  SD[u] < α × s
is not satisfied.
Thus, the reliability index values SD[1] and SD[2] of prediction points 1 and 2 are smaller than the threshold value α × s = 1.5 × 4.0 = 6.0 and satisfy the judgment formula
  SD[u] < α × s,
so their reliability is judged to be high and the predicted trajectory up to these points is drawn and displayed.
However, the reliability index value SD[3] of prediction point 3 is not smaller than the threshold value α × s = 6.0 and does not satisfy the judgment formula, so its reliability is judged to be low and it is decided not to draw and display the predicted trajectory immediately preceding this prediction point.
As shown in FIG. 18, the predicted trajectory from prediction point 2 (112) to prediction point 3 (113) becomes the non-displayed predicted trajectory 151.
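The per-point display decision just illustrated can be summarized in a short sketch. This is a minimal illustration only, not the patented implementation; the function name and list layout are assumptions introduced here.

```python
# Minimal sketch of the per-point display decision (Processing Example 3).
# The function name and the data layout are illustrative assumptions.

def segment_is_drawn(sd_values, alpha, s):
    """For each prediction point u (1-based), decide whether the predicted
    trajectory segment ending at that point is drawn: SD[u] < alpha * s."""
    threshold = alpha * s
    return {u: sd < threshold for u, sd in enumerate(sd_values, start=1)}

# Values from the worked example (FIG. 18): SD[1]=0.08, SD[2]=3.75, SD[3]=8.52
print(segment_is_drawn([0.08, 3.75, 8.52], alpha=1.5, s=4.0))
# -> {1: True, 2: True, 3: False}: the segment up to point 3 is not displayed
```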
In this (Processing Example 3), predicted trajectories with low reliability are hidden and only predicted trajectories with high reliability are displayed, so the user (the person drawing) observes only predicted trajectories that are close to the actual trajectory.
In the description of (Processing Example 3) above, the coefficient in the judgment formula
  SD[u] < α × s
was treated as a fixed value such as α = 1.5. When the judgment formula is satisfied and the reliability is judged to be high, the predicted trajectory up to the corresponding point is drawn and displayed; when the judgment formula is not satisfied and the reliability is judged to be low, drawing and display of the predicted trajectory up to that point is stopped.
The above processing example is thus reliability-dependent display control. As another processing example, regardless of the reliability, the predicted trajectory may be displayed so that it becomes fainter (more transparent) toward its tip, so that its color becomes gradually lighter, or so that its line becomes gradually thinner. This control can also be realized by performing the judgment while changing the coefficient α toward the tip of the predicted trajectory.
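The following is a minimal sketch of one possible tip fade-out, assuming a renderer that accepts a per-segment opacity. The linear ramp, the minimum opacity, and the function name are assumptions introduced here, not the patent's method.

```python
# Hedged sketch of the tip-fade alternative: opacity decreases toward the tip
# of the predicted trajectory regardless of reliability. All names are assumptions.

def segment_opacities(num_points, min_opacity=0.2):
    """Linearly decrease opacity from 1.0 (segment nearest the pen) toward the tip."""
    if num_points <= 1:
        return [1.0] * num_points
    step = (1.0 - min_opacity) / (num_points - 1)
    return [1.0 - step * i for i in range(num_points)]

print(segment_opacities(3))  # approximately [1.0, 0.6, 0.2] for three prediction points
```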
(Processing Example 4)
Next, Processing Example 4 will be described. Processing Example 4 is a processing example in which drawing of the predicted trajectory is switched on and off, that is, switched between display and non-display, according to the situation.
A specific example will be described with reference to FIG. 19 and subsequent figures.
FIG. 19 shows two detection states that serve as conditions for executing this (Processing Example 4).
For example, when either of the states (A) and (B) shown in FIG. 19 is detected, on/off control of the predicted trajectory is executed according to this (Processing Example 4). That is, the following situations are detected:
(A) it is detected that the amount of decrease in the input device pressure value per unit time is equal to or greater than a specified threshold value [Thp];
(B) it is detected that the amount of input device movement per unit time is less than a specified threshold value [Thd].
For example, when a state such as (A) or (B) above is detected, processing for turning off the predicted trajectory is performed.
In case (A) above, it is predicted that the input device, for example a dedicated pen, is moving away from the display unit. If the predicted trajectory continues to be displayed in this state, an overshoot may occur in which only the predicted trajectory is drawn even though there is no actual trajectory from the user. To prevent such an overshoot, drawing of the predicted trajectory is stopped in this case.
In case (B) above, it is expected that the movement of the input device, for example a dedicated pen, will stop. If the predicted trajectory continues to be displayed in this state, as in (A) above, an overshoot may occur in which only the predicted trajectory is drawn even though there is no actual trajectory from the user. To prevent such an overshoot, drawing of the predicted trajectory is stopped in this case.
Note that the input device is not limited to a dedicated pen and may be, for example, a finger.
A specific change in the pressure value corresponding to (A) of FIG. 19 will be described with reference to FIG. 20.
The upper graph in FIG. 20 shows the time transition of the pressure value (P) of the input device against the display unit.
The horizontal axis represents time (t), and the vertical axis represents the pressure value (P) of the input device against the display unit. The time (t) on the horizontal axis may be replaced with the frame number of the display frame.
The transition of the pressure value (P) from time T1 to time T5 is as follows.
Time T1: pressure value = 0.53
Time T2: pressure value = 0.54
Time T3: pressure value = 0.42
Time T4: pressure value = 0.30
Time T5: pressure value = 0.00
The pressure value thus decreases gradually over time. This is presumed to indicate, for example, a state in which the pen is gradually moving away from the display unit.
The lower graph in FIG. 20 is created from the time-series pressure data in the upper graph and shows the time transition of the pressure value difference data, that is, the amount of change in the pressure value per unit time.
The horizontal axis represents time (t), and the vertical axis represents the pressure value difference (ΔP), the per-unit-time difference of the pressure value of the input device against the display unit.
For example, the pressure value difference of +0.01 at time T2 is the difference between the pressure value at time T2 (0.54) and the pressure value at time T1, the preceding pressure measurement time (0.53), that is,
  0.54 − 0.53 = +0.01.
The difference value at time T3 is the difference between the pressure value at time T3 and the pressure value at time T2, and so on.
The transition of the pressure value difference (ΔP) from time T1 to time T5 is as follows.
Time T1: pressure value difference = 0.0
Time T2: pressure value difference = +0.01
Time T3: pressure value difference = −0.12
Time T4: pressure value difference = −0.12
Time T5: pressure value difference = −0.30
Here, the threshold value [THp] of the pressure value difference is set to −0.09, that is,
  THp = −0.09.
At each time, when the measured pressure value difference shows a drop larger than this threshold value (−0.09), control is performed to stop (turn off) the display of the predicted trajectory.
In the example shown in FIG. 20, the pressure value difference of −0.12 at time T3 indicates a drop larger than the threshold value (−0.09), so the display of the predicted trajectory is stopped at this point.
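A minimal sketch of this pressure-drop condition follows, using the sign convention of FIG. 20 (the threshold is expressed as a negative difference). The function name and list layout are assumptions introduced for illustration.

```python
# Hedged sketch of condition (A): stop showing the predicted trajectory once
# the per-unit-time pressure drop reaches the threshold THp. Names are assumptions.

def first_stop_time(pressures, thp=-0.09):
    """Return the index (time step) at which prediction display should stop, or None."""
    for t in range(1, len(pressures)):
        dp = pressures[t] - pressures[t - 1]  # pressure value difference ΔP
        if dp <= thp:  # the drop is at least as large as the threshold
            return t
    return None

# Values from FIG. 20: P = 0.53, 0.54, 0.42, 0.30, 0.00 at T1..T5
print(first_stop_time([0.53, 0.54, 0.42, 0.30, 0.00]))  # -> 2, i.e. time T3
```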
By performing such display-stop processing for the predicted trajectory, it is possible to prevent an overshoot in which the predicted trajectory is erroneously displayed after the user's input device has left the display unit.
Next, processing according to the amount of movement of the input device per unit time shown in (B) of FIG. 19 will be described with reference to FIG. 21.
The graph shown in FIG. 21 shows the time transition of the moving distance (D) per unit time of the input device on the display unit.
The horizontal axis represents time (t), and the vertical axis represents the moving distance (D) of the input device per unit of display time. The time (t) on the horizontal axis may be replaced with the frame number of the display frame.
The moving distance (D) per unit time is, for example, the moving distance over one display frame.
For example, the moving distance of 15 shown at time T1 corresponds to the distance the input device moved during the frame interval from time T0, the preceding frame display timing, to time T1, the current frame display timing.
The transition of the per-unit-time moving distance (D) from time T1 to time T5 is as follows.
Time T1: per-unit-time moving distance = 15
Time T2: per-unit-time moving distance = 10
Time T3: per-unit-time moving distance = 3
Time T4: per-unit-time moving distance = 2
Time T5: per-unit-time moving distance = 1
The per-unit-time moving distance thus decreases gradually over time. This is presumed to indicate, for example, a state in which the pen is gradually coming to a stop on the display unit.
Here, the threshold value [THd] of the per-unit-time moving distance is set to 4, that is,
  THd = 4.
At each time, when the measured per-unit-time moving distance becomes equal to or less than this threshold value (4), control is performed to stop (turn off) the display of the predicted trajectory.
In the example shown in FIG. 21, the per-unit-time moving distance of 3 at time T3 is equal to or less than the threshold value (4), so the display of the predicted trajectory is stopped at this point.
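A minimal sketch of this movement condition follows. As above, the function name, default threshold, and list layout are assumptions introduced for illustration.

```python
# Hedged sketch of condition (B): stop showing the predicted trajectory once
# the per-frame moving distance falls to THd or below.

def first_slow_time(distances, thd=4):
    """Return the index (time step) at which prediction display should stop, or None."""
    for t, d in enumerate(distances):
        if d <= thd:
            return t
    return None

# Values from FIG. 21: D = 15, 10, 3, 2, 1 at T1..T5
print(first_slow_time([15, 10, 3, 2, 1]))  # -> 2, i.e. time T3
```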
By performing such display-stop processing for the predicted trajectory, it is possible to prevent an overshoot in which the predicted trajectory is erroneously displayed after the user's input device has stopped on the display unit.
(Processing Example 5)
Next, Processing Example 5 will be described. Processing Example 5 is a processing example in which, when the predicted trajectory deviates from the actual trajectory, the display mode of the predicted trajectory is changed so that the predicted trajectory becomes less conspicuous.
A specific example of this Processing Example 5 will be described with reference to FIG. 22.
The display example (a) shown in FIG. 22 is an ordinary display example. The drawn trajectory 100 and the predicted trajectory 105 are displayed. The predicted trajectory 105 is displayed as a line connecting prediction points 1 to 3. As described above, prediction points 1 to 3 are the average positions of the constituent coordinates of a plurality of auxiliary predicted trajectories calculated from a plurality of similar trajectories.
Note that the actual trajectory 121 shown in the figure is not displayed on the display unit.
FIG. 22(b) is a display example when this Processing Example 5 is applied. In FIG. 22(b), the display area of the predicted trajectory 105 is set as the display mode change area 171, and the predicted trajectory 105 is changed to an inconspicuous display mode.
This is executed when the reliability of the predicted trajectory is low.
Specifically, for example, processing using the reliability index value described above is performed.
The reliability index value SD[u] is calculated for each of prediction point 1 (111) to prediction point 3 (113) set as the constituent points of the predicted trajectory 105. This reliability index value corresponds to the standard deviation (σ) of the constituent coordinates of the plurality of auxiliary predicted trajectories (see FIG. 11) applied to the calculation of each prediction point.
When the reliability index value is equal to or greater than a predetermined value and the reliability is judged to be low, the display area of the predicted trajectory 105 is set as the display mode change area 171, as shown in FIG. 22(b), and the predicted trajectory 105 is changed to an inconspicuous display mode.
By performing such display control, an erroneous predicted trajectory that has deviated from the actual trajectory can be presented to the user (the person drawing) in an inconspicuous way.
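One way to picture this behavior is the small sketch below. It assumes, purely for illustration, that "inconspicuous" is modeled as a low-opacity (or blurred) rendering style, that the decision is taken over the whole predicted trajectory, and that the limit value is hypothetical; none of these details are fixed by the description above.

```python
# Hedged sketch of Processing Example 5: if any prediction point has a reliability
# index value at or above a predetermined limit, render the whole predicted
# trajectory in an inconspicuous mode. The limit and the style dictionary are
# illustrative assumptions, not values from the source.

def predicted_trajectory_style(sd_values, sd_limit=6.0):
    """Choose a rendering style for the predicted trajectory."""
    low_reliability = any(sd >= sd_limit for sd in sd_values)
    if low_reliability:
        return {"opacity": 0.15, "blur": True}   # inconspicuous display mode
    return {"opacity": 0.6, "blur": False}       # normal predicted-trajectory mode

print(predicted_trajectory_style([0.08, 3.75, 8.52]))  # -> low-reliability style
```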
[8. Processing sequence of display control of the predicted trajectory]
Next, the processing sequence of the display control of the predicted trajectory will be described with reference to the flowcharts in FIG. 23 and subsequent figures.
The flowchart shown in FIG. 23 is the basic sequence of the display control processing and illustrates a sequence for executing display control according to the above-described (Processing Example 1) to (Processing Example 3).
The flowchart shown in FIG. 24 illustrates a display control sequence that, in addition to the above-described (Processing Example 1) to (Processing Example 3), executes drawing control of the predicted trajectory according to the pressure value of the input device and the like.
The processes shown in these flowcharts are executed under the control of the data processing unit of the information processing apparatus of the present disclosure, specifically a data processing unit including, for example, a CPU having a program execution function. The program is stored, for example, in the memory of the information processing apparatus.
First, the sequence for executing display control according to the above-described (Processing Example 1) to (Processing Example 3) will be described with reference to the flowchart shown in FIG. 23.
(Step S301)
First, in step S301, an input event based on the input device is detected. Specifically, this is, for example, processing for detecting the touch position of the dedicated pen on the input display unit.
This processing is performed as detection of time (or frame) = t and the pen contact position coordinate (x, y) information corresponding to that time.
(Step S302)
The information processing apparatus applies the event information input in step S301 to draw a trajectory on the display unit. As described above, however, an area arises in which the drawing processing corresponding to the actual trajectory of the pen cannot keep up, that is, the drawing delay area 23 described with reference to FIG. 1.
In order to draw a predicted trajectory in this drawing delay area, learning processing using the trajectory information of the past drawn trajectory is performed.
That is, learning processing for estimating the predicted trajectory is performed according to the processing described with reference to FIGS. 5 to 8 and the like. Specifically, as shown in FIG. 6, processing such as detecting, from the drawn trajectory 80, a similar trajectory resembling the immediately preceding trajectory 82 that includes the latest drawing position 81 of the drawn trajectory 80 is executed.
In step S302, learning processing such as this similar-trajectory detection is executed.
(Step S303)
In step S303, estimation processing of the predicted trajectory using the similar trajectories detected in step S302 is executed.
As the procedure of the predicted trajectory estimation processing, k coordinates corresponding to a future frame u are calculated from the plurality (k) of similar trajectories, and the average position of these k coordinates is taken as the constituent coordinate of the predicted trajectory in frame u.
When the future frames to be predicted extend n points ahead, the coordinates constituting the predicted trajectory and the reliability index value SD of each predicted coordinate are calculated individually, that is, (x_0, y_0, sd_0) to (x_n, y_n, sd_n) are calculated. Here, with i as the variable corresponding to the predicted future frame number, n + 1 coordinate positions and reliability index values are calculated for i = 0 to n.
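The following is a minimal sketch of this estimation step. The data layout (a list of k auxiliary trajectories, each a list of (x, y) points per future frame) and, in particular, the way the per-axis standard deviations are combined into a single SD value are assumptions; the description above only states that SD corresponds to the standard deviation of the coordinate positions.

```python
# Hedged sketch of step S303: take the mean of the k auxiliary coordinates as
# the predicted point and their spread as its reliability index value.
import statistics

def estimate_predicted_trajectory(aux_trajectories):
    """aux_trajectories: list of k trajectories, each a list of (x, y) per future frame."""
    predicted = []
    n_frames = len(aux_trajectories[0])
    for i in range(n_frames):
        xs = [traj[i][0] for traj in aux_trajectories]
        ys = [traj[i][1] for traj in aux_trajectories]
        x_mean, y_mean = statistics.mean(xs), statistics.mean(ys)
        # One scalar spread per frame; here the mean of the per-axis deviations (assumption).
        sd = (statistics.pstdev(xs) + statistics.pstdev(ys)) / 2
        predicted.append((x_mean, y_mean, sd))
    return predicted

aux = [[(1.0, 1.0), (2.0, 2.1)], [(1.2, 0.9), (2.4, 1.9)], [(0.8, 1.1), (1.6, 2.0)]]
print(estimate_predicted_trajectory(aux))  # list of (x_i, y_i, sd_i) for i = 0, 1
```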
(Step S304)
Steps S304 to S308 form a loop in which, for each coordinate position of i = 0 to n corresponding to the constituent coordinates of the predicted trajectory, processing according to the reliability index value of that coordinate position is executed repeatedly.
As described above with reference to FIG. 12 and the like, the reliability index value SD is a value corresponding to the standard deviation σ of the plurality of estimated coordinates of each future frame calculated on the basis of the plurality of similar trajectories.
(Step S305)
In step S305, the reliability index value SD[i] of the constituent coordinate (x_i, y_i) of the predicted trajectory is compared with the threshold value (α × s).
As described above with reference to FIG. 18 and the like, α is a preset coefficient, and s is the trajectory moving distance per frame in the immediately preceding trajectory 161 shown in FIG. 18.
In step S305, it is determined whether the judgment formula
  SD[i] < α × s
is satisfied.
When the judgment formula is satisfied, the standard deviation σ of the plurality of predicted coordinate positions calculated from the plurality of similar trajectories that were the source of the calculation is small for the constituent coordinate (x_i, y_i) of the predicted trajectory, that is, the variation is small. In this case, the reliability of the constituent coordinate (x_i, y_i) of the predicted trajectory is judged to be high, and the processing proceeds to step S306.
On the other hand, when the judgment formula is not satisfied, the standard deviation σ of the plurality of predicted coordinate positions calculated from the plurality of similar trajectories is large, that is, the variation is large. In this case, the reliability of the constituent coordinate (x_i, y_i) of the predicted trajectory is judged to be low, and the processing proceeds to step S307.
(Step S306)
When it is determined in step S305 that the reliability of the constituent coordinate (x_i, y_i) of the predicted trajectory is high, drawing processing of the predicted trajectory is executed in step S306. Here, the predicted trajectory is drawn in a manner different from the already-drawn trajectory corresponding to the actual trajectory.
That is, as described with reference to FIGS. 13 and 14, a predicted trajectory is drawn and displayed with at least one of its color, transmittance, and thickness set differently from the drawn actual trajectory.
Further, as described with reference to FIGS. 15 and 16, the configuration may be such that at least one of the color, transmittance, and thickness is additionally changed according to the reliability.
The predicted trajectory is updated and displayed successively every time a new event occurs.
(Step S307)
On the other hand, when it is determined in step S305 that the reliability of the constituent coordinate (x_i, y_i) of the predicted trajectory is low, drawing processing of the predicted trajectory is suspended in step S307.
In this case, the predicted trajectory is not drawn.
(Step S308)
Steps S304 to S308 form a loop in which, for each coordinate position of i = 0 to n corresponding to the constituent coordinates of the predicted trajectory, processing according to the reliability index value of that coordinate position is executed repeatedly.
When the processing for all the coordinate positions of i = 0 to n is completed, the processing proceeds to step S309.
(Step S309)
In step S309, it is determined whether there is a next event input.
If no next event input is detected, the processing ends.
If a next event input is detected, the processing proceeds to step S310.
(Step S310)
In step S310, coordinate information corresponding to the new input event is acquired.
As in step S301, this processing is performed as detection of time (or frame) = t and the pen contact position coordinate (x, y) information corresponding to that time.
Thereafter, the processing from step S302 onward is repeated on the basis of the coordinate information corresponding to the new input event.
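The sequence of FIG. 23 can be summarized as the loop sketched below. The helper callables and their signatures are assumptions standing in for the processing described above, not an API defined by the present disclosure; only the loop structure follows the flowchart.

```python
# Hedged sketch of the FIG. 23 sequence (steps S301-S310), written as a generic
# loop over injected helpers so that it stays self-contained.

def display_control_loop(detect_event, learn_similar, estimate, moving_distance,
                         draw_segment, alpha=1.5):
    event = detect_event()                           # S301: time t and (x, y)
    while event is not None:
        similar = learn_similar(event)               # S302: similar-trajectory learning
        predicted = estimate(similar)                # S303: list of (x, y, sd) for i = 0..n
        s = moving_distance(event)                   # per-frame distance of the preceding trajectory
        for x, y, sd in predicted:                   # S304-S308 loop
            if sd < alpha * s:                       # S305: reliability check
                draw_segment(x, y)                   # S306: draw in the predicted-trajectory style
            # else: S307 - drawing of this segment is suspended
        event = detect_event()                       # S309/S310: continue while events arrive
```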
Next, the display control sequence that, in addition to the above-described (Processing Example 1) to (Processing Example 3), executes drawing control of the predicted trajectory according to the pressure value of the input device and the like will be described with reference to the flowchart shown in FIG. 24.
(Step S401)
First, in step S401, an input event based on the input device is detected. Specifically, this is, for example, processing for detecting the touch position of the dedicated pen on the input display unit.
This processing is performed as detection of time (or frame) = t, the pen contact position coordinate (x, y) information corresponding to that time, and additionally the pressure value p.
(Step S402)
The information processing apparatus applies the event information input in step S401 to draw a trajectory on the display unit. As described above, however, an area arises in which the drawing processing corresponding to the actual trajectory of the pen cannot keep up, that is, the drawing delay area 23 described with reference to FIG. 1.
In order to draw a predicted trajectory in this drawing delay area, learning processing using the trajectory information of the past drawn trajectory is performed.
That is, learning processing for estimating the predicted trajectory is performed according to the processing described with reference to FIGS. 5 to 8 and the like. Specifically, as shown in FIG. 6, processing such as detecting, from the drawn trajectory 80, a similar trajectory resembling the immediately preceding trajectory 82 that includes the latest drawing position 81 of the drawn trajectory 80 is executed.
In step S402, learning processing such as this similar-trajectory detection is executed.
(Step S403)
In step S403, estimation processing of the predicted trajectory using the similar trajectories detected in step S402 is executed.
As the procedure of the predicted trajectory estimation processing, k coordinates corresponding to a future frame u are calculated from the plurality (k) of similar trajectories, and the average position of these k coordinates is taken as the constituent coordinate of the predicted trajectory in frame u.
When the future frames to be predicted extend n points ahead, the coordinates constituting the predicted trajectory and the reliability index value SD of each predicted coordinate are calculated individually, that is, (x_0, y_0, sd_0) to (x_n, y_n, sd_n) are calculated. Here, with i as the variable corresponding to the predicted future frame number, n + 1 coordinate positions and reliability index values are calculated for i = 0 to n.
(Step S404)
Steps S404 to S408 form a loop in which, for each coordinate position of i = 0 to n corresponding to the constituent coordinates of the predicted trajectory, processing according to the reliability index value of that coordinate position is executed repeatedly.
As described above with reference to FIG. 12 and the like, the reliability index value SD is a value corresponding to the standard deviation σ of the plurality of estimated coordinates of each future frame calculated on the basis of the plurality of similar trajectories.
(Step S405)
In step S405, the reliability index value SD[i] of the constituent coordinate (x_i, y_i) of the predicted trajectory is compared with the threshold value (α × s).
As described above with reference to FIG. 18 and the like, α is a preset coefficient, and s is the trajectory moving distance per frame in the immediately preceding trajectory 161 shown in FIG. 18.
In step S405, it is determined whether the judgment formula
  SD[i] < α × s
is satisfied.
When the judgment formula is satisfied, the standard deviation σ of the plurality of predicted coordinate positions calculated from the plurality of similar trajectories that were the source of the calculation is small for the constituent coordinate (x_i, y_i) of the predicted trajectory, that is, the variation is small. In this case, the reliability of the constituent coordinate (x_i, y_i) of the predicted trajectory is judged to be high, and the processing proceeds to step S406.
On the other hand, when the judgment formula is not satisfied, the standard deviation σ of the plurality of predicted coordinate positions calculated from the plurality of similar trajectories is large, that is, the variation is large. In this case, the reliability of the constituent coordinate (x_i, y_i) of the predicted trajectory is judged to be low, and the processing proceeds to step S407.
(Step S406)
When it is determined in step S405 that the reliability of the constituent coordinate (x_i, y_i) of the predicted trajectory is high, drawing processing of the predicted trajectory is executed in step S406. Here, the predicted trajectory is drawn in a manner different from the already-drawn trajectory corresponding to the actual trajectory.
That is, as described with reference to FIGS. 13 and 14, a predicted trajectory is drawn and displayed with at least one of its color, transmittance, and thickness set differently from the drawn actual trajectory.
Further, as described with reference to FIGS. 15 and 16, the configuration may be such that at least one of the color, transmittance, and thickness is additionally changed according to the reliability.
The predicted trajectory is updated and displayed successively every time a new event occurs.
(Step S407)
On the other hand, when it is determined in step S405 that the reliability of the constituent coordinate (x_i, y_i) of the predicted trajectory is low, drawing processing of the predicted trajectory is suspended in step S407.
In this case, the predicted trajectory is not drawn.
Alternatively, as described above with reference to FIG. 22, display control such as blurring processing may be executed so that the predicted trajectory becomes inconspicuous.
(Step S408)
Steps S404 to S408 form a loop in which, for each coordinate position of i = 0 to n corresponding to the constituent coordinates of the predicted trajectory, processing according to the reliability index value of that coordinate position is executed repeatedly.
When the processing for all the coordinate positions of i = 0 to n is completed, the processing proceeds to step S409.
(Step S409)
In step S409, it is determined whether a condition for suspending the drawing processing of the predicted trajectory has been detected.
This is a determination of whether the detection states (A) and (B) described above with reference to FIG. 19, for example, have occurred.
That is, the following states are detected:
(A) detection that the amount of decrease in the input device pressure value per unit time is equal to or greater than the specified threshold value [Thp];
(B) detection that the amount of input device movement per unit time is less than the specified threshold value [Thd].
In step S409, it is determined whether, for example, either of the above (A) and (B) has been detected.
If it is determined that one of them has been detected, the processing proceeds to step S410.
If neither has been detected, the processing proceeds to step S411.
(Step S410)
When it is determined in step S409 that one of the above situations (A) and (B) has been detected, drawing of the predicted trajectory is suspended in step S410.
This processing corresponds to the processing described above with reference to FIGS. 19 to 21.
After this processing, the processing proceeds to step S411.
(Step S411)
In step S411, it is determined whether there is a next event input.
If no next event input is detected, the processing ends.
If a next event input is detected, the processing proceeds to step S412.
(Step S412)
In step S412, coordinate information corresponding to the new input event is acquired.
As in step S401, this processing is performed as detection of time (or frame) = t, the pen contact position coordinate (x, y) information corresponding to that time, and the pressure value p.
Thereafter, the processing from step S402 onward is repeated on the basis of the coordinate information corresponding to the new input event.
[9. An example of the effect of displaying a highly accurate predicted trajectory]
By applying the processing of the present disclosure described above, it becomes possible to display the trajectory of an input device such as a dedicated pen with high accuracy.
An example of the effect of executing such processing will be described with reference to FIG. 25.
For example, as shown in FIG. 25(a), consider a case in which a character is drawn according to steps S1 and S2.
In step S1, a horizontal line L1 is first written.
Then, in step S2, a vertical line L2 is written.
Assume that the line L2 must be written as a line passing approximately through the center position of the line L1.
However, if the trajectory drawing is delayed as shown in FIG. 25(b), then, as shown at (P1) in the figure, the line segment at the end of the horizontal line L1 remains undisplayed for a certain period after the user has written the horizontal line L1.
In this state, even if the user tries to start writing the vertical line L2, the center position of the horizontal line L1 cannot be grasped accurately. Therefore, as shown at (P2) in FIG. 25(b), a problem arises in that the starting position of L2 cannot be determined.
By executing highly accurate drawing of the predicted trajectory to which the processing of the present disclosure is applied, such a problem is resolved and the user can carry out the drawing smoothly.
[10. Example of the hardware configuration of the information processing apparatus of the present disclosure]
Next, an example of the hardware configuration of the information processing apparatus of the present disclosure will be described with reference to FIG. 26.
As illustrated in FIG. 26, the information processing apparatus includes an input unit 301, an output unit (display unit) 302, a sensor 303, a control unit (CPU or the like) 304, a memory (RAM) 305, and a memory (nonvolatile memory) 306.
The input unit 301 is configured, for example, as an input unit that also serves as a display unit having a touch panel function. However, the touch panel function is not indispensable; any configuration may be used as long as it inputs detection information on the movement of another input device.
As the input device, a dedicated pen can be used, and a user's finger can also be used as the input device. Alternatively, a configuration may be used in which the movement information of a device such as a gyro pointer is input. Furthermore, the input unit 301 may be configured as a camera that detects user movement (such as gestures) and uses it as input information.
A mouse provided with a general PC may also be set as the input device.
Note that the input unit 301 is not limited to the function of inputting the movement information of the input device, and also includes input elements for performing various settings such as brightness adjustment and mode setting of the display unit.
The output unit 302 includes, for example, a display unit that displays trajectory information corresponding to the movement information of the input device detected by the input unit 301, such as a touch-panel display.
The sensor 303 is a sensor that inputs information to be applied to the processing of the present disclosure, such as a sensor that detects the pressure of the touch pen or a sensor that detects the pressing area of a finger.
The control unit 304 includes, for example, a CPU configured from electronic circuitry, and functions as a data processing unit that executes, for example, the processing according to the flowcharts described in the above embodiments.
The memory 305 is, for example, a RAM and is used as a work area for executing the processing according to the flowcharts described in the embodiments, as well as a storage area for the position information of the input device used by the user, various parameters applied to the data processing, and the like.
The memory 306 is a nonvolatile memory and is used as a storage area for storing, for example, a program for executing the processing according to the flowcharts described in the above embodiments, as well as the user's drawing trajectory and the like.
In the embodiments described above, a processing example applying the k-nearest neighbors (kNN) method was described as the learning processing for calculating the predicted trajectory, but the learning processing for estimating the predicted trajectory of the present disclosure is not limited to kNN, and other methods may be applied. For example, the following methods can be applied:
Linear Regression,
Support Vector Regression (SVR),
Relevance Vector Regression (RVR),
Hidden Markov Model (HMM).
Furthermore, for example, the prediction technique described in JP 2011-118786 A, an earlier patent application by the present applicant, may be applied.
The information processing apparatus of the present disclosure may be configured to be integrated with the display device that performs the trajectory display, or may be configured separately so as to be able to communicate with the display device. For example, it may be a server capable of data communication via a network. In this case, the input information of the input device operated by the user is transmitted to the server, and the server calculates the predicted trajectory on the basis of the learning processing described above. Furthermore, the server transmits the calculation result to a display device on the user side, for example a tablet terminal, and processing is executed to display the predicted trajectory on the display unit of the tablet terminal.
[11. Summary of the configuration of the present disclosure]
The embodiments of the present disclosure have been described in detail above with reference to specific embodiments. However, it is obvious that those skilled in the art can make modifications and substitutions of the embodiments without departing from the gist of the present disclosure. In other words, the present invention has been disclosed in the form of exemplification and should not be interpreted restrictively. In order to determine the gist of the present disclosure, the claims should be taken into consideration.
The technology disclosed in this specification can take the following configurations.
(1) An information processing apparatus including a data processing unit that performs display control processing of a trajectory according to input position information generated by a user's operation input,
wherein the data processing unit executes display control that displays, in different manners,
an actual trajectory specified on the basis of the input position information, and
a predicted trajectory, which is a trajectory of an area for which the specification of the actual trajectory has not been completed and which is specified by predetermined prediction processing.
(2) The information processing apparatus according to (1), wherein the predicted trajectory is a trajectory predicted on the basis of an actual trajectory specified in the past.
(3) The information processing apparatus according to (1) or (2), wherein the data processing unit displays at least one of the color, transparency, and thickness of the predicted trajectory in a manner different from the actual trajectory.
(4) The information processing apparatus according to any one of (1) to (3), wherein the data processing unit sets at least one of the color, transparency, and thickness of the predicted trajectory to a manner different from the actual trajectory, and displays the predicted trajectory in a manner less conspicuous than the actual trajectory.
(5) The information processing apparatus according to any one of (1) to (4), wherein the data processing unit determines the display manner of the predicted trajectory according to the reliability of the predicted trajectory.
(6) The information processing apparatus according to (5), wherein the data processing unit determines and displays at least one of the color, transparency, and thickness of the predicted trajectory according to the reliability of the predicted trajectory.
(7) The information processing apparatus according to (5), wherein the data processing unit changes at least one of the color, transparency, and thickness of the predicted trajectory according to the reliability of the predicted trajectory, and displays the predicted trajectory in a less conspicuous manner as the reliability decreases.
(8) The information processing apparatus according to (5), wherein the data processing unit controls the display unit so that the predicted trajectory is not displayed when it determines that the reliability of the predicted trajectory is lower than a specified threshold value.
(9) The information processing apparatus according to (5), wherein the data processing unit changes the display of the predicted trajectory to an inconspicuous display manner when it determines that the reliability of the predicted trajectory is lower than a specified threshold value.
(10) The information processing apparatus according to any one of (1) to (9), wherein the input position information is input position information obtained by detecting the contact position of an input object on a touch panel, and the data processing unit acquires a pressure value representing the pressure with which the input object contacts the touch panel and determines the display manner of the predicted trajectory according to the pressure value.
(11) The information processing apparatus according to (10), wherein the data processing unit controls the display unit so that the predicted trajectory is not displayed when it detects that the decrease in the pressure value has become larger than a specified threshold value.
(12) The information processing apparatus according to any one of (1) to (11), wherein the data processing unit acquires the amount of movement of the input position information per unit time and determines the display manner of the predicted trajectory according to the amount of movement.
(13) The information processing apparatus according to (12), wherein the data processing unit stops the display of the predicted trajectory when it detects that the amount of movement has become less than a specified threshold value.
(14) The information processing apparatus according to (10), wherein the input object is a stylus.
(15) The information processing apparatus according to any one of (1) to (14), wherein, as the predicted trajectory calculation processing, the data processing unit detects, from the past trajectory, a plurality of similar trajectories resembling the trajectory immediately preceding the calculation area of the predicted trajectory, estimates the subsequent trajectory of each similar trajectory on the basis of the detected plurality of similar trajectories, and calculates the predicted trajectory by averaging processing or weighted addition processing of the estimated plurality of subsequent trajectories.
(16) An information processing method executed in an information processing apparatus, wherein
a data processing unit performs display control processing of a trajectory according to input position information generated by a user's operation input, and,
in the display control processing, the data processing unit executes display control that displays, in different manners,
an actual trajectory specified on the basis of the input position information, and
a predicted trajectory, which is a trajectory of an area for which the specification of the actual trajectory has not been completed and which is specified by predetermined prediction processing.
(17) A program for causing an information processing apparatus to execute information processing, the program causing
a data processing unit to perform display control processing of a trajectory according to input position information generated by a user's operation input, and,
in the display control processing, to execute display control that displays, in different manners,
an actual trajectory specified on the basis of the input position information, and
a predicted trajectory, which is a trajectory of an area for which the specification of the actual trajectory has not been completed and which is specified by predetermined prediction processing.
The series of processes described in the specification can be executed by hardware, by software, or by a combined configuration of both. When the processing is executed by software, a program in which the processing sequence is recorded can be installed in a memory in a computer incorporated in dedicated hardware and executed, or the program can be installed and executed on a general-purpose computer capable of executing various kinds of processing. For example, the program can be recorded in advance on a recording medium. In addition to being installed on a computer from a recording medium, the program can be received via a network such as a LAN (Local Area Network) or the Internet and installed on a recording medium such as a built-in hard disk.
The various processes described in the specification are not only executed in time series according to the description, but may also be executed in parallel or individually according to the processing capability of the apparatus that executes them or as necessary. Further, in this specification, a system is a logical set configuration of a plurality of apparatuses, and the apparatuses of each configuration are not limited to being in the same housing.
 As described above, according to the configuration of an embodiment of the present disclosure, display control of a predicted trajectory predicted from past trajectory information is realized.
 Specifically, the device has a data processing unit that performs trajectory display control processing according to input position information. The data processing unit displays, as an actual trajectory, the portion for which trajectory calculation based on the input position information has been completed, estimates as a predicted trajectory the trajectory of a region for which that calculation has not yet been completed, and displays the estimated predicted trajectory in a manner different from the already displayed actual trajectory. For example, at least one of the color, transparency, or thickness of the predicted trajectory differs from that of the actual trajectory, so that the predicted trajectory is rendered less conspicuous than the actual trajectory. The predicted trajectory can also be hidden according to its reliability.
 With this configuration, display control of a predicted trajectory predicted from past trajectory information is realized.
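For illustration only (not part of the disclosed embodiments), the following Python sketch shows one way such style control could be organized: the actual and predicted strokes get distinct color, transparency, and width values, and the predicted stroke is faded or hidden as its reliability drops. The StrokeStyle fields, the concrete style values, and the reliability threshold are assumptions introduced for this example.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StrokeStyle:
    color: str
    alpha: float   # 0.0 (fully transparent) .. 1.0 (opaque)
    width: float   # line width in pixels

# Illustrative values: the predicted segment is thinner and more transparent
# than the confirmed (actual) segment, so it stays less conspicuous.
ACTUAL_STYLE = StrokeStyle(color="black", alpha=1.0, width=3.0)
PREDICTED_STYLE = StrokeStyle(color="gray", alpha=0.4, width=1.5)

RELIABILITY_THRESHOLD = 0.3  # below this, the predicted segment is hidden

def styles_for_frame(reliability: float) -> Tuple[StrokeStyle, Optional[StrokeStyle]]:
    """Return (actual_style, predicted_style) for one display frame.
    predicted_style is None when the predicted trajectory should not be drawn."""
    if reliability < RELIABILITY_THRESHOLD:
        return ACTUAL_STYLE, None
    # Fade the predicted stroke further as its reliability drops.
    faded = StrokeStyle(
        color=PREDICTED_STYLE.color,
        alpha=PREDICTED_STYLE.alpha * reliability,
        width=PREDICTED_STYLE.width,
    )
    return ACTUAL_STYLE, faded

if __name__ == "__main__":
    print(styles_for_frame(0.9))  # predicted stroke drawn, slightly faded
    print(styles_for_frame(0.1))  # predicted stroke hidden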
DESCRIPTION OF SYMBOLS
  10 Input display device
  11 Input device
  15 Pen tip position
  21 Drawn trajectory
  22 Latest drawing position
  23 Drawing delay region
  31 Drawn trajectory
  32 Latest drawing position
  33 Approximation-applied trajectory
  34 Pen trajectory
  35 Predicted trajectory
  41 Drawn trajectory
  42 Latest drawing position
  43 Approximation-applied trajectory
  51 Drawn trajectory
  52 Latest drawing position
  53 Predicted trajectory
  54 Pen position
  71 Drawn trajectory
  72 Latest drawing position
  73 Predicted trajectory
  80 Drawn trajectory
  81 Latest drawing position
  82 Immediately preceding trajectory
  83 Similar trajectory
  84 Subsequent trajectory of similar trajectory
  85 Auxiliary predicted trajectory
  86 Finally determined predicted trajectory
  90 Drawn trajectory
  91 Immediately preceding trajectory
  92 Auxiliary predicted trajectory
  93 Finally determined predicted trajectory
 100 Drawn trajectory
 101 Immediately preceding trajectory
 102 Latest drawing position
 103 Auxiliary predicted trajectory
 104 Standard deviation of auxiliary predicted trajectory
 105 Predicted trajectory
 111 to 113 Predicted points
 121 Actual trajectory
 151 Hidden predicted trajectory
 171 Display mode change region
 301 Input unit
 302 Output unit
 303 Sensor
 304 Control unit
 305 Memory (RAM)
 306 Memory (non-volatile memory)

Claims (17)

  1.  An information processing device comprising a data processing unit that performs trajectory display control processing according to input position information generated by a user's operation input,
      wherein the data processing unit executes display control for displaying, in mutually different manners,
      an actual trajectory identified on the basis of the input position information, and
      a predicted trajectory that is a trajectory of a region for which identification of the actual trajectory has not been completed and that is identified by a predetermined prediction process.
  2.  The information processing device according to claim 1, wherein the predicted trajectory is a trajectory predicted on the basis of an actual trajectory identified in the past.
  3.  The information processing device according to claim 1, wherein the data processing unit displays at least one of the color, transparency, or thickness of the predicted trajectory in a manner different from that of the actual trajectory.
  4.  The information processing device according to claim 1, wherein the data processing unit makes at least one of the color, transparency, or thickness of the predicted trajectory different from that of the actual trajectory, and displays the predicted trajectory in a manner less conspicuous than the actual trajectory.
  5.  The information processing device according to claim 1, wherein the data processing unit determines the display manner of the predicted trajectory according to the reliability of the predicted trajectory.
  6.  The information processing device according to claim 5, wherein the data processing unit determines at least one of the color, transparency, or thickness of the predicted trajectory according to the reliability of the predicted trajectory and displays the predicted trajectory accordingly.
  7.  The information processing device according to claim 5, wherein the data processing unit changes at least one of the color, transparency, or thickness of the predicted trajectory according to the reliability of the predicted trajectory, displaying the predicted trajectory less conspicuously as the reliability decreases.
  8.  The information processing device according to claim 5, wherein, when the reliability of the predicted trajectory is determined to be lower than a prescribed threshold, the data processing unit controls the display unit so that the predicted trajectory is not displayed.
  9.  The information processing device according to claim 5, wherein, when the reliability of the predicted trajectory is determined to be lower than a prescribed threshold, the data processing unit changes the display of the predicted trajectory to an inconspicuous display manner.
  10.  The information processing device according to claim 1, wherein the input position information is input position information obtained by detecting a contact position of an input object on a touch panel, and
      the data processing unit acquires a pressure value representing the pressure with which the input object contacts the touch panel and determines the display manner of the predicted trajectory according to the pressure value.
  11.  The information processing device according to claim 10, wherein, when the data processing unit detects that the pressure value has dropped by an amount larger than a prescribed threshold, it controls the display unit so that the predicted trajectory is not displayed.
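For illustration only, the following sketch shows one way the pressure-drop condition described in claim 11 could be checked per input sample; the threshold value and the function interface are assumptions introduced for this example, not values taken from the disclosure.

# A sharp drop in pen pressure often precedes the pen leaving the panel,
# so the predicted trajectory is hidden when the drop exceeds a threshold.
PRESSURE_DROP_THRESHOLD = 0.25  # illustrative value, normalized pressure units

def should_hide_prediction(prev_pressure: float, curr_pressure: float) -> bool:
    """Hide the predicted trajectory when pressure falls sharply between samples."""
    return (prev_pressure - curr_pressure) > PRESSURE_DROP_THRESHOLD

if __name__ == "__main__":
    print(should_hide_prediction(0.8, 0.7))   # False: small drop, keep the prediction
    print(should_hide_prediction(0.8, 0.3))   # True: sharp drop, hide the prediction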
  12.  The information processing device according to claim 1, wherein the data processing unit acquires the amount of movement of the input position information per unit time and determines the display manner of the predicted trajectory according to the amount of movement.
  13.  The information processing device according to claim 12, wherein the data processing unit stops displaying the predicted trajectory when it detects that the amount of movement has fallen below a prescribed threshold.
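For illustration only, the following sketch shows one way the movement-amount condition of claims 12 and 13 could be evaluated; the sample format and the minimum-movement constant are assumptions introduced for this example.

import math

MIN_MOVEMENT_PER_SAMPLE = 2.0  # display pixels per sample interval, illustrative value

def movement_per_sample(p_prev, p_curr) -> float:
    """Euclidean distance moved between two consecutive position samples."""
    return math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])

def prediction_enabled(p_prev, p_curr) -> bool:
    """Suppress the predicted trajectory while the pen is nearly stationary."""
    return movement_per_sample(p_prev, p_curr) >= MIN_MOVEMENT_PER_SAMPLE

if __name__ == "__main__":
    print(prediction_enabled((10, 10), (30, 25)))  # True: fast stroke, prediction shown
    print(prediction_enabled((10, 10), (11, 10)))  # False: nearly stopped, prediction suppressed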
  14.  The information processing device according to claim 10, wherein the input object is a stylus.
  15.  The information processing device according to claim 1, wherein, as the predicted trajectory calculation process, the data processing unit
      detects, from past trajectories, a plurality of similar trajectories that resemble the trajectory immediately preceding the region for which the predicted trajectory is calculated,
      estimates the subsequent trajectory of each detected similar trajectory, and
      calculates the predicted trajectory by averaging or weighted addition of the plurality of estimated subsequent trajectories.
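For illustration only, the following sketch reproduces the general idea of claim 15: the segment immediately preceding the prediction region is matched against earlier parts of the trajectory, the segments that followed the best matches are collected, and their weighted average becomes the predicted trajectory. The window length, prediction horizon, number of matches, and inverse-distance weighting are assumptions introduced for this example.

import numpy as np

def predict_next(history: np.ndarray, window: int = 4, horizon: int = 3,
                 k: int = 3) -> np.ndarray:
    """Predict the next `horizon` points from an (N, 2) array of past pen positions.

    1. Take the most recent `window` points as the immediately preceding trajectory.
    2. Scan the older history for the k segments most similar to it.
    3. For each match, collect the `horizon` points that followed it.
    4. Blend those follow-up segments, weighted by similarity, and attach the
       result after the latest point to obtain the predicted trajectory.
    """
    recent = history[-window:] - history[-1]  # shape-normalized query segment
    candidates = []
    for start in range(len(history) - window - horizon):
        seg = history[start:start + window] - history[start + window - 1]
        dist = np.linalg.norm(seg - recent)
        follow = (history[start + window:start + window + horizon]
                  - history[start + window - 1])
        candidates.append((dist, follow))
    candidates.sort(key=lambda c: c[0])
    top = candidates[:k]
    weights = np.array([1.0 / (d + 1e-6) for d, _ in top])
    weights /= weights.sum()                  # weighted addition of follow-up segments
    blended = sum(w * f for w, (_, f) in zip(weights, top))
    return history[-1] + blended              # back to absolute coordinates

if __name__ == "__main__":
    # A circular stroke: the predicted points should continue along the circle.
    t = np.linspace(0, 2 * np.pi, 60)
    circle = np.stack([np.cos(t), np.sin(t)], axis=1) * 100
    print(predict_next(circle).round(1))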
  16.  An information processing method executed in an information processing device, wherein
      a data processing unit performs trajectory display control processing according to input position information generated by a user's operation input, and,
      in the display control processing, the data processing unit executes display control for displaying, in mutually different manners,
      an actual trajectory identified on the basis of the input position information, and
      a predicted trajectory that is a trajectory of a region for which identification of the actual trajectory has not been completed and that is identified by a predetermined prediction process.
  17.  A program that causes an information processing device to execute information processing, the program causing a data processing unit to perform trajectory display control processing according to input position information generated by a user's operation input, and,
      in the display control processing, to execute display control for displaying, in mutually different manners,
      an actual trajectory identified on the basis of the input position information, and
      a predicted trajectory that is a trajectory of a region for which identification of the actual trajectory has not been completed and that is identified by a predetermined prediction process.
PCT/JP2014/072227 2013-10-02 2014-08-26 Information processing device, information processing method, and program WO2015049934A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-206927 2013-10-02
JP2013206927 2013-10-02

Publications (1)

Publication Number Publication Date
WO2015049934A1 true WO2015049934A1 (en) 2015-04-09

Family

ID=52778530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/072227 WO2015049934A1 (en) 2013-10-02 2014-08-26 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2015049934A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109891491A (en) * 2016-10-28 2019-06-14 雷马克伯有限公司 Interactive display
US10599987B2 (en) 2016-07-14 2020-03-24 King Fahd University Of Petroleum And Minerals Apparatuses, systems, and methodologies for permeability prediction
CN111857431A (en) * 2020-07-24 2020-10-30 青岛海信商用显示股份有限公司 Information display method, touch control equipment and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06289993A (en) * 1993-03-30 1994-10-18 Matsushita Electric Ind Co Ltd Coordinate input display device
JPH09190275A (en) * 1996-01-12 1997-07-22 Nec Corp Handwritten input display device
JP2006178625A (en) * 2004-12-21 2006-07-06 Canon Inc Coordinate input device, its control method and program
JP2011100282A (en) * 2009-11-05 2011-05-19 Seiko Epson Corp Display device and program
JP2013025788A (en) * 2011-07-22 2013-02-04 Tpk Touch Solutions (Xiamen) Inc Touchscreen touch tracking device and method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10599987B2 (en) 2016-07-14 2020-03-24 King Fahd University Of Petroleum And Minerals Apparatuses, systems, and methodologies for permeability prediction
US10885455B2 (en) 2016-07-14 2021-01-05 King Fahd University Of Petroleum And Minerals Method for predicting permeability and oil content in a geological formation
US11887019B2 (en) 2016-07-14 2024-01-30 King Fahd University Of Petroleum And Minerals Geological formation permeability prediction system
CN109891491A (en) * 2016-10-28 2019-06-14 雷马克伯有限公司 Interactive display
CN109891491B (en) * 2016-10-28 2022-07-15 雷马克伯有限公司 Method and apparatus for controlling interactive display
CN111857431A (en) * 2020-07-24 2020-10-30 青岛海信商用显示股份有限公司 Information display method, touch control equipment and storage medium
CN111857431B (en) * 2020-07-24 2023-10-27 青岛海信商用显示股份有限公司 Information display method, touch control equipment and storage medium

Similar Documents

Publication Publication Date Title
JP2015072534A (en) Information processor, and information processing method and program
US8847978B2 (en) Information processing apparatus, information processing method, and information processing program
US10095402B2 (en) Method and apparatus for addressing touch discontinuities
JP2019194892A (en) Crown input for wearable electronic devices
KR101126167B1 (en) Touch screen and method of displaying
US20130120282A1 (en) System and Method for Evaluating Gesture Usability
US20150149947A1 (en) Character deletion during keyboard gesture
JP2017529623A (en) Wet ink predictor
TWI567592B (en) Gesture recognition method and wearable apparatus
CN102902469A (en) Gesture recognition method and touch system
US10331326B2 (en) Apparatus that controls scrolling operation past an edge of an image based on a type of user input
US11402923B2 (en) Input method, apparatus based on visual recognition, and electronic device
WO2015049934A1 (en) Information processing device, information processing method, and program
US8896561B1 (en) Method for making precise gestures with touch devices
KR102198596B1 (en) Disambiguation of indirect input
CN104965657A (en) Touch control method and apparatus
US20130100158A1 (en) Display mapping modes for multi-pointer indirect input devices
US20190018503A1 (en) Cursor control method and cursor control system
US9927917B2 (en) Model-based touch event location adjustment
JP2016095795A (en) Recognition device, method, and program
TWI768407B (en) Prediction control method, input system and computer readable recording medium
US20220121277A1 (en) Contextual zooming
US20170285770A1 (en) Enhanced user interaction with a device
US10558270B2 (en) Method for determining non-contact gesture and device for the same
JP4236962B2 (en) Handwritten character drawing apparatus, handwritten character drawing method, and program for causing a computer to execute the handwritten character drawing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14851175

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14851175

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP