WO2021140575A1 - Head-mounted display for displaying AR object - Google Patents

Head-mounted display for displaying AR object

Info

Publication number
WO2021140575A1
Authority
WO
WIPO (PCT)
Prior art keywords
head-mounted display
model image
image
unit
Application number
PCT/JP2020/000198
Other languages
French (fr)
Japanese (ja)
Inventor
川前 治
伊藤 保
Original Assignee
マクセル株式会社 (Maxell, Ltd.)
Application filed by マクセル株式会社 (Maxell, Ltd.)
Priority to PCT/JP2020/000198
Publication of WO2021140575A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a handwriting drawing support system.
  • In recent years, various portable information terminals, typified by smartphones, have come onto the market. Among them, a head-mounted display (hereinafter "HMD") can superimpose a computer-generated augmented reality (AR) image (an AR object such as an avatar) on the view of the real space through a glasses-type display screen.
  • Further, the user can obtain various information from the sensors mounted on the HMD. As a result, information from the three-dimensional sensor built into the HMD makes it possible to present a computer-generated AR object in the real space as if it actually existed there.
  • Patent Document 1 is a prior art document in this technical field. It describes a method of supporting handwriting drawing by projecting digital information (an image) onto paper on a work surface with a projector.
  • Patent Document 1 discloses displaying a model image to support handwriting drawing, but gives no consideration to enlarging/reducing or rotating/deforming the model image, or to non-planar work surfaces. Further, although Patent Document 1 mentions that an HMD may be used instead of the projector, it contains no description of how this would be realized, so the feasibility with an HMD is unknown.
  • In view of this, an object of the present invention is to construct a handwriting drawing support system that allows a handwritten image conforming to a model image to be created easily by using an HMD.
  • In one example, the present invention is a head-mounted display whose display is controlled by a control unit and which is provided with a three-dimensional sensor; the control unit grasps the shape of an object based on information from the three-dimensional sensor and displays an AR object on the display unit along the shape of the object.
  • FIG. 1 is an external view of the HMD in Example 1.
  • FIG. 2 is a hardware configuration diagram of the HMD in Example 1.
  • FIG. 3 is a functional block configuration diagram of the HMD in Example 1.
  • FIG. 4 is a schematic diagram for explaining the operating state in Example 1.
  • FIG. 5 is a schematic diagram showing the correspondence between the real space and the AR object on the display screen in Example 1.
  • FIG. 6 is a flowchart showing the outline of the processing procedure in Example 1.
  • FIG. 7 is a flowchart showing the details of the model image selection process in Example 1.
  • FIG. 8 is a schematic diagram showing the model image presentation state in Example 1.
  • FIG. 9 is a schematic diagram showing the model image selection state in Example 1.
  • FIG. 10 is a schematic diagram showing the erased state of the non-selected model image in Example 1.
  • FIG. 11 is a flowchart showing the details of the model image transformation/movement process in Example 1.
  • FIG. 12 is a flowchart showing the details of the model image handwriting process in Example 1.
  • FIG. 13 is a schematic diagram showing the state before handwriting starts in Example 1.
  • FIG. 14 is a schematic diagram showing a partway state of handwriting in Example 1.
  • FIG. 15 is a schematic diagram showing the completed handwritten image in Example 1.
  • FIG. 16 is a schematic diagram showing an example of mismatch between the model image and the handwritten image in Example 1.
  • FIG. 17 is a gesture operation correspondence table showing the gesture operations in Example 1 and the processing contents of the model image for those operations.
  • FIG. 1 is an external view showing the HMD in this embodiment.
  • In FIG. 1, the HMD 1 has a transmissive display screen 75 at the lens position of the glasses, and the situation in the real space is observed through the display screen 75.
  • An augmented reality AR object is also displayed on the display screen 75. Therefore, the wearer of the HMD 1 can simultaneously visually recognize both the augmented reality AR object displayed on the display screen 75 and the situation in the real space.
  • FIG. 2 is a hardware configuration diagram of the HMD in this embodiment.
  • In FIG. 2, the HMD 1 is composed of a main control device 2, a system bus 3, a storage device 4, a sensor device 5, a communication processing device 6, a video processing device 7, a voice processing device 8, and an operation input device 9.
  • The main control device 2 is a microprocessor unit that controls the entire HMD 1 according to predetermined operation programs. That is, each function is realized in software by the microprocessor unit interpreting and executing the operation program that implements that function.
  • The system bus 3 is a data communication path for transmitting and receiving various commands and data between the main control device 2 and each constituent block in the HMD 1.
  • The storage device 4 is composed of a program unit 41 that stores operation programs for controlling the operation of the HMD 1, various data units 42 that store various data such as operation setting values, detection values from the sensor units, objects including contents, and library information downloaded from a library, and a rewritable program function unit 43 such as a work area used by various program operations.
  • The storage device 4 can also store operation programs downloaded from the network and various data created by those programs, contents such as moving images, still images, and sounds downloaded from the network, and data such as moving images and still images taken with the camera function. Further, the storage device 4 needs to hold the stored information even when the HMD 1 is not supplied with power from the outside. Therefore, devices such as semiconductor element memory (e.g., a flash ROM or SSD (Solid State Drive)) and magnetic disk drives (e.g., an HDD (Hard Disc Drive)) are used.
  • The operation programs may be stored in the program unit 41 or the like of the HMD 1 in advance at the time of product shipment.
  • Each operation program stored in the program unit 41 can be updated and its functions can be expanded by a download process from server devices on the network.
  • The sensor device 5 is a group of various sensors for detecting the state of the HMD 1.
  • The sensor device 5 includes a GPS (Global Positioning System) receiving unit 51, a geomagnetic sensor unit 52, a three-dimensional sensor unit 53, an acceleration sensor unit 54, a gyro sensor unit 55, and the like. With these sensors, the position, tilt, direction, movement, etc. of the HMD 1 can be detected. The HMD 1 may further include other sensors such as an illuminance sensor, an altitude sensor, and a proximity sensor.
  • The three-dimensional sensor unit 53 of this embodiment is described taking the phase difference method (phase shift method) as an example, but it is not limited to this method.
  • The phase difference method irradiates an object with a plurality of modulated laser beams and measures the distance to the object from the phase difference of the returning diffuse reflection component.
  • With the three-dimensional sensor unit 53, the distance to each point of the object can be grasped.
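As a concrete illustration of the relation the phase difference method relies on, the following Python sketch computes a distance from a measured phase shift of the modulated beam. The formula d = c * delta_phi / (4 * pi * f) is the standard phase-shift ranging relation; the function name, modulation frequency, and example values are illustrative assumptions, not taken from the patent.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def phase_shift_distance(phase_diff_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase difference of a modulated laser beam.

    The reflected light travels to the object and back (2d), so
    2d = (delta_phi / 2pi) * (C / f_mod), i.e. d = C * delta_phi / (4pi * f_mod).
    The result is unambiguous only while delta_phi < 2pi, i.e. d < C / (2 * f_mod).
    """
    return C * phase_diff_rad / (4.0 * math.pi * mod_freq_hz)

# Example: 10 MHz modulation and a measured phase shift of pi/2 rad
# give d = 3e8 * (pi/2) / (4 * pi * 1e7), about 3.75 m.
print(phase_shift_distance(math.pi / 2, 10e6))  # ~3.75
```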
  • The communication processing device 6 is composed of a LAN (Local Area Network) communication unit 61 and a telephone network communication unit 62.
  • The LAN communication unit 61 is connected to a network such as the Internet via an access point or the like, and transmits and receives data to and from server devices on the network.
  • The connection with the access point or the like may be made by a wireless connection such as Wi-Fi (registered trademark).
  • The telephone network communication unit 62 performs telephone communication (calls) and data transmission/reception by wireless communication with base stations of a mobile telephone communication network. Communication with a base station may be performed by the W-CDMA (Wideband Code Division Multiple Access) (registered trademark) method, the GSM (Global System for Mobile communications) (registered trademark) method, the LTE (Long Term Evolution) method, or another communication method.
  • The LAN communication unit 61 and the telephone network communication unit 62 each include a coding circuit, a decoding circuit, an antenna, and the like. The communication processing device 6 may further include other communication units such as a Bluetooth (registered trademark) communication unit and an infrared communication unit.
  • The video processing device 7 is composed of an imaging unit 71 and a display unit 72.
  • The imaging unit 71 is a camera unit that inputs image data of the surroundings and of objects by converting the light entering through a lens into an electric signal using an electronic device such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor.
  • The display unit 72 is a transmissive display device using, for example, a laser projector and a half mirror; it constitutes the display screen 75 and provides image data to the user of the HMD 1.
  • The voice processing device 8 is composed of a voice input/output unit 81, a voice recognition unit 82, and a voice decoding unit 83.
  • The voice input of the voice input/output unit 81 is a microphone, which converts the user's voice and the like into voice data and inputs it. The voice output of the voice input/output unit 81 is a speaker, which outputs voice information and the like needed by the user.
  • The voice recognition unit 82 analyzes the input voice information and extracts instruction commands and the like.
  • The voice decoding unit 83 has a function of performing decoding processing (voice synthesis processing) of coded voice signals and the like as necessary.
  • The operation input device 9 is an instruction input unit for inputting operation instructions to the HMD 1.
  • The operation input device 9 is composed of operation keys in which button switches and the like are arranged. Other operation devices may be further provided.
  • The communication processing device 6 may be used to operate the HMD 1 from a separate mobile terminal device connected by wired or wireless communication.
  • The voice recognition unit 82 of the voice processing device 8 may be used to operate the HMD 1 by voice commands.
  • The HMD 1 may also be operated by analyzing the captured video of the imaging unit 71 of the video processing device 7 and recognizing operations such as gestures.
  • Although the configuration of the HMD 1 shown in FIG. 2 includes many components that are not essential to this embodiment, the effect of this embodiment is not impaired even if they are omitted. Configurations not shown, such as a digital broadcast reception function and an electronic money payment function, may be further added.
  • FIG. 3 is a functional block configuration diagram of the HMD in this embodiment.
  • In FIG. 3, the control unit 11 is mainly composed of the main control device 2 and the program unit 41 and program function unit 43 of the storage device 4, and constitutes the handwriting drawing support system.
  • The three-dimensional sensor information acquisition unit 12 has the function of acquiring information from the three-dimensional sensor unit 53 of the sensor device 5.
  • The information from the three-dimensional sensor unit 53 includes distance information from the HMD 1 to each point of an object.
  • The three-dimensional data processing unit 13 has the function of grasping the shape of the object based on the information from the three-dimensional sensor unit 53 (the distance information from the HMD 1 to each point of the object).
  • The three-dimensional data storage unit 14 has the function of storing the three-dimensional data obtained by the three-dimensional data processing unit 13 in the various data units 42 of the storage device 4.
  • The shooting data acquisition unit 15 uses the imaging unit 71 of the video processing device 7 and has the function of shooting the real space and acquiring shooting data.
  • The shooting data acquisition unit 15 can acquire handwritten image data and information such as gesture operations.
  • The communication processing unit 16 is composed of the LAN communication unit 61 and the telephone network communication unit 62 of the communication processing device 6, and has the function of uploading various information to, and downloading it from, external network servers via the Internet.
  • Images of AR objects such as model images are also downloaded from, or uploaded to, external servers using the communication processing unit 16. By downloading model images from external servers into the various data units 42 of the storage device 4, a rich variety of model images can be presented to the user.
  • The AR image information storage unit 17 has the function of storing the AR image information obtained by the communication processing unit 16 in the various data units 42 of the storage device 4.
  • The AR image generation processing unit 18 has the function of generating an AR object based on the AR object information stored in the AR image information storage unit 17.
  • The AR image generation processing unit 18 can perform processing such as enlargement/reduction and rotation/deformation of the model image.
  • The AR image display processing unit 19 has the function of displaying the AR object generated by the AR image generation processing unit 18 on the display screen 75 of the HMD 1. It can also display the model image (the virtual image of the AR object) along the shape of the handwriting surface based on the shape information of the handwriting surface stored by the three-dimensional data storage unit 14.
  • FIG. 4 is a schematic diagram for explaining an operating state in this embodiment.
  • In FIG. 4, the user 10 wearing the HMD 1 visually recognizes the handwriting surface 28 through the display screen 75 of the HMD 1.
  • The AR object displayed on the display screen 75 of the HMD 1 can be visually recognized by the user 10 wearing the HMD 1 as a virtual image 27 of the AR object existing on the handwriting surface 28.
  • The three-dimensional information of the handwriting surface 28 stored by the three-dimensional data storage unit 14 captures the fact that the distance from the HMD 1 to the far side of the handwriting surface 28 is longer than the distance from the HMD 1 to the near side.
  • FIG. 5 is a schematic diagram showing, although somewhat stylized, the correspondence between the real space and the AR object on the display screen.
  • In FIG. 5, the rectangular AR object 22 displayed on the display screen 75 of the HMD 1 is drawn with the far-side edge shorter than the near-side edge and the far-side lines thinner than the near-side lines (perspective display), so that the virtual image 27 of the AR object can be visually recognized as if it existed on the handwriting surface 28.
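The perspective display described above follows directly from a pinhole projection: points farther from the HMD project closer together on the image plane. The sketch below is a minimal illustration of that effect, not code from the patent; the focal length and rectangle coordinates are assumed values.

```python
import numpy as np

def project_points(points_xyz: np.ndarray, focal_px: float) -> np.ndarray:
    """Pinhole projection: a point (x, y, z) in camera coordinates maps to
    (focal * x / z, focal * y / z) on the image plane, so edges farther
    away (larger z) come out shorter -- the 'perspective display' effect."""
    xy = points_xyz[:, :2] / points_xyz[:, 2:3]
    return focal_px * xy

# A rectangle lying flat on a desk, near edge at z = 0.5 m, far edge at z = 0.8 m.
rect = np.array([
    [-0.1, -0.05, 0.5], [0.1, -0.05, 0.5],   # near corners
    [-0.1, -0.05, 0.8], [0.1, -0.05, 0.8],   # far corners
])
px = project_points(rect, focal_px=800)
near_len = px[1, 0] - px[0, 0]   # 320 px
far_len = px[3, 0] - px[2, 0]    # 200 px: the far side is drawn shorter
print(near_len, far_len)
```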
  • In step S420, a model image selection process for selecting the model image to be handwritten is performed.
  • The details of the model image selection process S420 will be described later.
  • In step S430, it is determined whether or not there is an instruction such as deformation or movement for the selected model image.
  • If the process of S430 finds an instruction to deform or move the selected model image, the model image is enlarged/reduced, rotated/deformed, or moved in step S440, and the model image to be handwritten is fixed. The details of the model image transformation/movement process S440 will be described later.
  • If step S430 finds no instruction to deform or move the selected model image, that is, if the selected model image can be used as it is, the process proceeds to step S460.
  • Step S460 is a model image handwriting process that supports handwriting drawing when handwriting is performed on the handwriting paper on the handwriting surface; handwriting is supported by tracing the model image.
  • The details of the model image handwriting process S460 will be described later. Even during the execution of the model image handwriting process S460, instructions such as deformation and movement of the model image remain effective, so the optimum model image can be presented at any time. This completes the outline of the processing procedure in the HMD 1.
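The outline above can be summarized as a simple control loop. The sketch below is a hypothetical rendering of that flow; `hmd` and all of its methods are assumed names standing in for the control unit 11, not an API disclosed in the patent.

```python
def drawing_support_loop(hmd):
    """Control flow mirroring the outline of FIG. 6 (steps S420-S460)."""
    model = hmd.select_model_image()            # S420: model image selection
    while not hmd.handwriting_finished():
        if hmd.has_transform_instruction():     # S430: deform/move instruction?
            model = hmd.transform_model(model)  # S440: enlarge/reduce, rotate, move
        hmd.support_handwriting(model)          # S460: overlay the model for tracing
    return model
```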
  • FIG. 7 is a flowchart showing the details of the model image selection process S420.
  • First, the shape grasping process (S422) of the handwriting surface is performed.
  • This is a process in which the three-dimensional sensor information acquisition unit 12 measures the distance from the HMD 1 to each point of the handwriting surface 28, and the three-dimensional data processing unit 13 grasps the shape of the handwriting surface 28.
  • The shape of the handwriting surface 28 obtained here is stored in the various data units 42 of the storage device 4 by the three-dimensional data storage unit 14. If the shape of the handwriting surface has already been grasped and stored in the various data units 42 of the storage device 4, the stored shape information is used.
  • The handwriting surface 28 in this embodiment is assumed to be a rectangular plane. Since the handwriting surface 28 is rectangular, its shape can be grasped by grasping the positions of its four corners. Moreover, once the four corner positions are known, even if one corner happens to fall outside the measurable range, its position can be estimated from the positions of the remaining three corners, as sketched below. Instead of grasping the positions of the four corners, the shape of the handwriting surface 28 may be grasped by grasping the positions of the four sides.
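A minimal sketch of the corner-estimation step mentioned above: for a rectangle (or any parallelogram), the missing corner follows from the other three. The coordinates are illustrative assumptions.

```python
import numpy as np

def fourth_corner(p1: np.ndarray, p2: np.ndarray, p3: np.ndarray) -> np.ndarray:
    """Given three corners of a rectangle in order (p2 adjacent to both p1
    and p3), the missing corner closes the parallelogram: p4 = p1 + p3 - p2."""
    return p1 + p3 - p2

# Three measured corners of an A4 sheet on a desk (metres, sensor frame).
p1 = np.array([0.00, 0.00, 0.600])
p2 = np.array([0.21, 0.00, 0.600])
p3 = np.array([0.21, 0.00, 0.897])
print(fourth_corner(p1, p2, p3))  # [0.    0.    0.897]
```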
  • FIG. 8 is a schematic view showing a first model image 31 and a second model image 32 presented on the handwriting surface 28.
  • In FIG. 8, two candidate model images are presented: the first model image 31 is a rectangle and the second model image 32 is a triangle. Figures and illustration line drawings are used as model images.
  • The video information from the shooting data acquisition unit 15 is acquired, the gesture operation is analyzed, and the model images to be presented are selected and displayed.
  • Gesture operations of a finger or of the entire hand are targeted. That is, the screen of presented model images can be switched by a gesture of sliding the entire hand to the left or right. For example, sliding the entire hand to the left presents the screen of the previous model images, and sliding the entire hand to the right presents the screen of the next model images.
  • FIG. 9 is a schematic view showing a state in which the first model image 31 presented on the handwriting surface 28 is selected.
  • In FIG. 9, the first model image 31 presented on the handwriting surface 28 is selected by pointing at it with the finger 39.
  • The image information of the finger 39 and the three-dimensional distance information are acquired from the shooting data acquisition unit 15 and the three-dimensional sensor information acquisition unit 12, and the image of the AR object displayed on the display screen 75 of the HMD 1 by the AR image display processing unit 19 is processed accordingly. Therefore, the schematic view of FIG. 9 shows that the part of the first model image 31 (the virtual image of the AR object) lying under the finger 39 is visually recognized as if hidden by it.
  • The selection instruction is given by a gesture of the finger 39. Specifically, when it is detected that the finger 39 is at the same distance as the handwriting surface 28 and positioned over the first model image 31, that is, that the finger 39 is pointing at the model image 31, selection can be instructed by a double-click gesture of the finger 39.
  • Next, the non-selected model image erasing process (S425) for erasing the non-selected model images presented on the handwriting surface 28 is performed.
  • Here, an example of selecting the model image 31 on the handwriting surface 28 has been shown, but the selection does not necessarily have to be made on the handwriting surface 28; the model image 31 may instead be selected on the display screen 75 of the HMD 1.
  • FIG. 10 is a schematic view showing a state in which only the selected first model image 31 is presented, the second model image 32 having been erased from the handwriting surface 28 because it was not selected. This completes the model image selection process S420.
  • FIG. 11 is a flowchart showing the details of the model image transformation / movement process S440.
  • The instructions given by gesture operations are analyzed in the order of enlargement/reduction, rotation/deformation, and position movement, but the procedure is not limited to this order.
  • First, it is determined whether the given gesture instruction is an enlargement or reduction instruction (process S442). If so, the enlargement/reduction process (S443) enlarges or reduces the model image and presents it. When a gesture of opening the thumb and index finger is recognized, the model image is enlarged and presented; when a gesture of closing the thumb and index finger is recognized, the model image is reduced and presented.
  • The enlargement and reduction ratios in this embodiment depend on the magnitude of the gesture, but preset ratios may be used instead.
  • Next, it is determined whether the given gesture instruction is a rotation or deformation instruction (process S444). If so, the rotation/deformation process (S445) rotates or deforms the model image and presents it.
  • The rotation angle in this embodiment depends on the arc drawn by the finger, but a preset rotation angle may be used instead.
  • For deformation, the model image is deformed and presented in the sliding direction of the finger.
  • Finally, it is determined whether the given instruction is a position movement (process S446). If so, the position movement process (S447) moves the position of the model image.
  • The movement distance in this embodiment depends on the magnitude of the gesture, but a preset movement distance may be used instead. This completes the model image transformation/movement process S440.
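The S440 operations (enlargement/reduction, rotation, position movement) amount to applying a 2D similarity transform to the model image. The following sketch shows one conventional way to compose and apply such a transform; it is an illustration under assumed parameter values, not the patent's implementation.

```python
import numpy as np

def make_transform(scale: float, angle_rad: float, t: tuple[float, float]) -> np.ndarray:
    """2D homogeneous transform combining the S440 operations:
    enlargement/reduction (scale), rotation (angle), and movement (t)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([
        [scale * c, -scale * s, t[0]],
        [scale * s,  scale * c, t[1]],
        [0.0,        0.0,       1.0],
    ])

def apply(points: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Apply the transform to an (N, 2) array of model-image contour points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ m.T)[:, :2]

# Pinch-open gesture: enlarge 1.5x; arc gesture: rotate 30 degrees; then move.
triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
m = make_transform(1.5, np.deg2rad(30), (0.2, 0.1))
print(apply(triangle, m))
```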
  • FIG. 12 is a flowchart showing the details of the model image handwriting process S460.
  • The handwriting paper shape grasping process (S462) is a process in which the three-dimensional sensor information acquisition unit 12 measures the distance from the HMD 1 to each point of the handwriting paper, and the three-dimensional data processing unit 13 grasps the shape of the handwriting paper.
  • The shape of the handwriting paper obtained here is stored in the various data units 42 of the storage device 4 by the three-dimensional data storage unit 14. If the shape of the handwriting paper has already been grasped and stored, the stored shape information is used.
  • The handwriting paper in this embodiment is assumed to be a rectangular plane, so its shape can be grasped by grasping the positions of its four corners. Of course, as long as the overall shape of the handwriting paper and the distance to each point on it can be measured, the shape is not limited to a rectangular plane.
  • FIG. 13 is a schematic view in which the handwriting paper 25 is placed on the handwriting surface 28 and the model image 35 of a rabbit, a virtual image of the AR object, is visually recognized on it.
  • If necessary, the model image transformation/movement process (S440) is executed again to optimize the model image 35.
  • FIG. 14 is a schematic view showing a state in which the outline (face) of the rabbit model image 35 is being handwritten using the pen 38.
  • Video information is acquired from the shooting data acquisition unit 15, and which parts have been handwritten, and to what extent, is analyzed.
  • In FIG. 14, the handwritten portion 36 (the face portion of the rabbit) drawn by tracing the rabbit model image 35 is shown as a slightly thicker line.
  • The portion of the rabbit model image 35 that overlaps the handwritten portion 36 is erased at the AR object stage on the display screen 75 of the HMD 1. As a result, the corresponding portion of the rabbit model image 35 that overlaps the handwritten portion 36 is erased from the handwriting paper 25 and no longer presented.
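One simple way to realize this erasing behaviour is to keep boolean masks of the model strokes and of the ink detected by the camera, both registered to the handwriting paper, and display only the model pixels not yet drawn over. The sketch below illustrates the idea with toy data; the mask representation is an assumption, not the patent's data structure.

```python
import numpy as np

def erase_traced(model_mask: np.ndarray, drawn_mask: np.ndarray) -> np.ndarray:
    """Hide the parts of the model image the user has already traced.

    model_mask / drawn_mask are boolean images aligned to the handwriting
    paper: True where a model stroke exists / where ink was detected by the
    camera. The AR display keeps only model pixels not yet drawn over."""
    return model_mask & ~drawn_mask

# Toy 1x5 stroke: the first three pixels have been traced, so only the
# last two remain visible as the AR overlay.
model = np.array([[True, True, True, True, True]])
drawn = np.array([[True, True, True, False, False]])
print(erase_traced(model, drawn))  # [[False False False  True  True]]
```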
  • FIG. 15 is a schematic view showing the handwritten image 37 in a state where handwriting has been completed by tracing the entire rabbit model image 35.
  • Since the rabbit model image 35 has been entirely handwritten (handwritten image 37), the AR object is no longer displayed on the display screen 75 of the HMD 1, and therefore the rabbit model image 35 is no longer presented on the handwriting paper 25.
  • In FIG. 15, the rabbit model image 35 and the handwritten image 37 are shown to be in perfect agreement. This completes the model image handwriting process S460.
  • Although the drawings of this embodiment are schematic drawings handwritten only in black, it goes without saying that ink of any color can be used and the rabbit can be colored with any colors. It is also possible to match the line thickness of the model image to the line thickness of the handwritten image. It is further possible to add a center line (a line representing the center of the width of a line) to the lines of the model image as a dotted line, broken line, dash-dot line, or the like, and to present it in a color (including white) different from that of the model image.
  • The schematic diagram of FIG. 16 shows a state in which the rabbit model image 35 and the handwritten image 37 do not completely match.
  • In FIG. 16, the rabbit's face portion of the handwritten image 37 does not match the rabbit model image 35; to make this state clear, the mismatched portion 33 of the rabbit model image 35 is shown by a broken line. This allows the user, after handwriting, to clearly recognize which parts differ from the model image.
  • Here, the model image 35 and the handwritten image 37 are superimposed and visually recognized, but the model image 35 and the handwritten image 37 can also be viewed side by side.
  • Further, the color of the model image 35 may be made different from that of the handwritten image 37, the line thickness or line style of the model image 35 may be changed, or the model image 35 may be blinked, so that the difference between the handwritten image 37 and the model image 35 is made clear.
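A mismatch display like that of FIG. 16 can be derived from the same two masks used above: pixels where the model and the finished handwriting disagree are exactly their symmetric difference. A minimal sketch, assuming boolean stroke masks:

```python
import numpy as np

def mismatch_mask(model_mask: np.ndarray, drawn_mask: np.ndarray) -> np.ndarray:
    """Pixels where the model and the finished handwriting disagree:
    model strokes never traced, plus ink drawn off the model (cf. FIG. 16,
    where such portions are re-displayed, e.g. as a broken line)."""
    return model_mask ^ drawn_mask  # symmetric difference

model = np.array([[True, True, False, False]])
drawn = np.array([[True, False, True, False]])
print(mismatch_mask(model, drawn))  # [[False  True  True False]]
```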
  • FIG. 17 is a gesture operation correspondence table showing the gesture operations in this embodiment and the processing content of the model image for each gesture operation.
  • In FIG. 17, the gesture operation correspondence table 500 is composed of a gesture operation list 501 used in this embodiment and a processing list 502 showing the processing content of the model image for each gesture operation.
  • For example, when the gesture operation 503, in which an open hand is placed over the handwriting surface 28 on which model images are presented and then moved to the right, is executed, the corresponding processing content 504 for the model image is executed, namely presenting the screen showing the next model images.
  • The other gesture operations listed in the gesture operation correspondence table 500 and their processing contents are omitted here. Each of the presented gesture operations is only an example, and other gesture operations can of course be enabled.
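A correspondence table like table 500 maps recognized gestures to model-image operations. The sketch below shows a dispatch-table rendering of that idea; the gesture labels and `ui` methods are assumed names, and the gesture recognizer itself is out of scope here.

```python
# Hypothetical dispatch table in the spirit of the gesture correspondence
# table 500: recognized gesture names map to model-image operations.
GESTURE_ACTIONS = {
    "slide_hand_right":    lambda ui: ui.show_next_model(),      # cf. 503/504
    "slide_hand_left":     lambda ui: ui.show_previous_model(),
    "double_click_finger": lambda ui: ui.select_pointed_model(),
    "pinch_open":          lambda ui: ui.scale_model(1.2),
    "pinch_close":         lambda ui: ui.scale_model(1 / 1.2),
}

def handle_gesture(name: str, ui) -> None:
    """Look up the recognized gesture and execute its model-image operation."""
    action = GESTURE_ACTIONS.get(name)
    if action is not None:
        action(ui)
```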
  • As described above, in this embodiment, the shape of the handwriting surface, such as paper, on which a handwritten image is to be created is grasped from the information of the three-dimensional sensor built into the HMD, and a computer-generated augmented reality AR object (model image) is presented on the handwriting surface as if it actually existed there in the real space; the creation of a handwritten image is supported by superimposing the model image and the handwritten image.
  • Further, the control processing of the HMD performs processing such as enlargement/reduction and rotation/deformation of the model image.
  • Therefore, the model image can be arranged at the optimum size and in the optimum position, and a handwritten image conforming to the model image can be easily created. A handwriting drawing support system achieving this can thus be provided.
  • This embodiment describes methods of instructing the processing content of the model image by means other than gesture operations.
  • The basic hardware configuration of this embodiment is the same as that of Example 1, and its description is omitted.
  • In this embodiment, voice commands are used to instruct the processing content of the model image.
  • A voice command is extracted by the voice recognition unit 82 analyzing the user's voice input via the voice input/output unit 81 of the voice processing device 8.
  • FIG. 18 is a correspondence table of the voice commands used in this embodiment.
  • In FIG. 18, the voice command correspondence table 520 is composed of a voice command list 521, a meaning list 522 showing the meaning of each voice command, and a processing list 523 showing the processing content for the model image corresponding to each voice command.
  • For example, the processing content 525 for the corresponding voice command means that the screen of the previous model images is presented.
  • The other voice commands listed in the voice command correspondence table 520 and their processing contents are not described here. Each voice command is only an example, and other voice commands can of course be enabled.
  • It is also possible to give a voice response to a voice command by using the voice synthesis function of the voice decoding unit 83 of the voice processing device 8.
  • For example, the system can respond by repeating the input voice command back to the user ("executing XX") or by asking for confirmation ("XX, correct?"); if the user's response to "XX, correct?" is "yes" ("hai"), the command is executed, and if the response is "no" ("iie"), a voice response such as "cancelling XX" is given.
  • The voice commands related to these voice responses ("yes" and "no") are not listed in the table, but it goes without saying that they are also supported.
  • Since the processing content for the model image can be instructed by voice commands, operations such as selecting a model image can be performed without gesture operations.
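The voice command table 520 can be realized the same way as the gesture table, with a confirmation step corresponding to the yes/no voice response described above. The command phrases and the `ui`/`confirm` interfaces below are illustrative assumptions, not the patent's wording.

```python
# Hypothetical mapping in the spirit of the voice command table 520: a
# phrase recognized by the voice recognition unit 82 is matched to a
# model-image operation, guarded by a spoken yes/no confirmation.
VOICE_COMMANDS = {
    "next":     lambda ui: ui.show_next_model(),
    "previous": lambda ui: ui.show_previous_model(),  # cf. processing content 525
    "bigger":   lambda ui: ui.scale_model(1.2),
    "smaller":  lambda ui: ui.scale_model(1 / 1.2),
}

def handle_utterance(text: str, ui, confirm) -> None:
    """Execute a recognized command after a spoken confirmation."""
    action = VOICE_COMMANDS.get(text)
    if action is None:
        return
    if confirm(f"{text}, correct?"):  # user answered "yes": execute
        action(ui)                    # user answered "no": cancel
```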
  • There are other methods of instructing the processing content of the model image. For example, it can be realized by mounting a touch sensor on the side of the HMD 1 (the temple portion of the glasses) and performing slide operations or click operations up, down, left, or right.
  • In Example 1, for simplicity, figures such as rectangles and triangles and illustration line drawings such as a rabbit were used as model images, but other images can also be used as model images.
  • In this embodiment, a case where the handwriting drawing support system described in Examples 1 and 2 is applied to penmanship will be described.
  • The basic hardware configuration of this embodiment is the same as that of Example 1, and its description is omitted.
  • FIG. 19 is a schematic diagram showing the procedure when the character "Ei" (eternity), which is said to be the basis of penmanship and is well known from the Eight Principles of Yong, is used as the model for penmanship.
  • FIG. 19 shows the procedure for learning the stroke order of the five-stroke kanji character "Ei".
  • First, the first stroke (the dot) of the character "Ei" is displayed in white as the model image.
  • The user traces the first stroke (the dot) of the model image displayed in white and handwrites it with a brush.
  • Similarly, the model image of FIG. 19(c) shows the third stroke (the rightward-rising horizontal and left sweep), the model image of FIG. 19(d) shows the fourth stroke (the short left sweep), and the model image of FIG. 19(e) shows the fifth stroke (the right sweep), each in white. The user handwrites with a brush following the presentation order of the model images displayed in white.
  • FIG. 19(f) shows the state in which the handwriting of the character "Ei" has been completed and only the handwritten character "Ei" 606 remains.
  • In this way, the stroke order of kanji can be easily learned visually.
  • Instead of displaying the model image in white, the black ink color may be slightly lightened or the color may be changed.
  • In this embodiment, the model image is displayed sequentially, one stroke per screen, but the entire model image can also be displayed and used as the model (a sketch of the sequential presentation follows below). Further, a brush is used in this embodiment, but another writing tool such as a pen may of course be used.
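The sequential stroke-by-stroke presentation can be sketched as a simple loop that shows the next model stroke in white, waits until the camera detects it has been traced, and then hides it. Everything here, the stroke placeholders and the `hmd` methods, is an assumed illustration, not an interface from the patent.

```python
# 永 ("Ei") has five strokes; these labels are placeholders for stroke data.
STROKES_EI = ["stroke1_dot", "stroke2", "stroke3", "stroke4", "stroke5"]

def stroke_order_lesson(hmd, strokes=STROKES_EI):
    """Present one stroke of the model character at a time (cf. FIG. 19)."""
    for stroke in strokes:
        hmd.display_model_stroke(stroke, color="white")  # next stroke in white
        while not hmd.stroke_traced(stroke):  # wait until the brush covers it
            pass
        hmd.hide_model_stroke(stroke)  # leave only the user's ink visible
    # After the last stroke only the handwritten character remains (FIG. 19(f)).
```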
  • A photograph can also be used as a model image.
  • With a photograph, the same processing as for the model images described above can be performed, and the handwriting drawing support system of the present invention can be realized.
  • The handwriting state in this embodiment means, for example, the state of the schematic diagram of FIG. 14 (a partially handwritten state).
  • When the handwriting paper 25 is moved (or rotated) in this state, the rabbit model image 35 is moved, including rotation, according to the movement direction (or, in the case of rotation, the rotation angle) and the movement distance of the handwriting paper 25.
  • That is, the corresponding portion of the rabbit model image 35 that overlaps the handwritten portion 36 is moved to the position where it again overlaps the handwritten portion 36, so that the relative positions of the handwritten portion 36 on the handwriting paper 25 and the rabbit model image 35 remain in agreement.
  • The following method applies to such cases as well. That is, when the three-dimensional sensor unit 53 of the sensor device 5 detects that the distance or angle between the handwriting paper 25 and the HMD 1 has changed, the shape of the handwriting paper 25 is grasped again, and the model image 35 is corrected and presented as if it existed on the handwriting paper 25.
  • Even when the handwriting paper 25 is tilted up from the handwriting surface, just as when the distance or angle between the handwriting paper 25 and the HMD 1 changes, the model image 35 is corrected so that it appears to exist on the handwriting paper 25, and the model image 35 tilted up from the handwriting surface is presented.
  • In this way, the model image can be displayed as if it existed on the handwriting paper even when the paper is moved, so the user can use the handwriting drawing support system without any sense of discomfort.
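In effect, the model image is anchored in the paper's coordinate frame, and the paper's pose in the HMD frame is re-estimated from the three-dimensional sensor whenever a change in distance or angle is detected. A minimal sketch of that re-anchoring, with assumed data layouts:

```python
import numpy as np

def reanchor_model(model_pts_paper: np.ndarray, paper_pose: np.ndarray) -> np.ndarray:
    """Re-anchor the model image when the paper moves or tilts.

    model_pts_paper: (N, 3) model points in the paper's own frame (z = 0 on
    the sheet). paper_pose: 4x4 rigid transform of the paper in the HMD
    frame, re-estimated each time the 3D sensor detects a change in
    distance or angle. The model then follows the paper."""
    homo = np.hstack([model_pts_paper, np.ones((len(model_pts_paper), 1))])
    return (homo @ paper_pose.T)[:, :3]

# Example: the paper slides 5 cm to the right; the model follows.
pose = np.eye(4)
pose[0, 3] = 0.05
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
print(reanchor_model(pts, pose))  # x coordinates shifted by 0.05
```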
  • In the examples so far, the model image is visually recognized as a virtual image of the AR object on the handwriting surface 28, as if the model image existed on the handwriting surface 28.
  • FIG. 20 shows the display on the display screen 75 of the HMD 1 in this embodiment.
  • In this embodiment, the model image 35 and the handwritten image 37 are displayed side by side on the display screen 75.
  • Specifically, the line of sight of the user wearing the HMD 1 is detected by the gyro sensor unit 55 of the sensor device 5, a line-of-sight sensor (not shown), or the like. Triggered by the user raising their face from the handwriting surface and the handwriting paper, the handwritten image on the handwriting paper is captured by the imaging unit 71, taken into the various data units 42 of the storage device 4, and the captured image is displayed on the display screen 75.
  • The handwritten image to be displayed may be an image partway through handwriting.
  • In this way, the model image and the handwritten image can be compared even when the user is not looking at the handwriting surface and the handwriting paper.
  • In the examples so far, the shapes of the handwriting surface and the handwriting paper have been described as rectangular planes, but non-planar shapes are also possible. In this embodiment, therefore, a case where the handwriting surface and the handwriting paper are non-planar will be described.
  • The basic hardware configuration of this embodiment is the same as that of Example 1, and its description is omitted.
  • FIG. 21 is a schematic view of this embodiment, in which the handwriting surface 621 has a non-planar, cylindrical shape and the handwriting paper 622 is placed on the cylinder.
  • In FIG. 21, the handwriting paper 622 is curved in a curved-surface (arc) shape along the surface of the cylinder.
  • The three-dimensional sensor unit 53 of the sensor device 5 grasps that the handwriting surface 621 is a cylindrical curved surface (see process S422 in FIG. 7), and further grasps that the handwriting paper 622 is curved (see process S462 in FIG. 12).
  • Then, based on the handwriting paper shape obtained by the handwriting paper shape grasping process (see process S462 in FIG. 12), the model image (the virtual image of the AR object) is presented curved along the curved surface of the handwriting paper 622.
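Geometrically, presenting the model along the cylinder means mapping arc length on the flat model to angle on the measured cylinder, so the image bends without stretching. A minimal sketch under that assumption (the radius and points are example values):

```python
import numpy as np

def wrap_on_cylinder(pts_uv: np.ndarray, radius: float) -> np.ndarray:
    """Bend a flat model image onto a cylinder of the measured radius.

    pts_uv: (N, 2) points of the flat model (u along the curve, v along the
    cylinder axis). Arc length u maps to angle u / radius, so the image
    follows the curved handwriting paper 622 without stretching."""
    theta = pts_uv[:, 0] / radius
    x = radius * np.sin(theta)          # horizontal position
    z = radius * (1.0 - np.cos(theta))  # depth offset from the tangent plane
    return np.stack([x, pts_uv[:, 1], z], axis=1)

# A 10 cm wide model wrapped on a 5 cm radius cylinder spans 2 rad of arc.
flat = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0]])
print(wrap_on_cylinder(flat, radius=0.05))
```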
  • In this embodiment, the handwriting paper 622 was used, but it goes without saying that the cylindrical handwriting surface 621 can also be handwritten on directly with paint or the like.
  • In this way, since the model image can be presented on a non-planar surface such as a plate or a cup along the shape of that surface, handwriting can be performed by tracing the model image presented on the non-planar surface.
  • The configurations for realizing the technique of the present invention are not limited to the above examples, and various modifications can be considered.
  • The numerical values and messages appearing in the text and figures are merely examples, and the effects of the present invention are not impaired even if different ones are used.
  • Some or all of the functions and the like of the present invention described above may be realized in hardware, for example by designing them as integrated circuits. The software processing described in the examples may also be used together with such hardware.
  • The present invention is not limited to a glasses-type HMD and may be any information processing device having a display device in front of the eyeball; for example, it may be a goggle type or a contact lens type.
  • 1: HMD, 2: Main control device, 3: System bus, 4: Storage device, 5: Sensor device, 6: Communication processing device, 7: Video processing device, 8: Voice processing device, 9: Operation input device, 11: Control unit, 12: 3D sensor information acquisition unit, 13: 3D data processing unit, 18: AR image generation processing unit, 19: AR image display processing unit, 22: AR object, 25: Handwriting paper, 28: Handwriting surface, 31, 32, 35: Model image, 37: Handwritten image, 42: Various data units, 53: Three-dimensional sensor unit, 81: Voice input/output unit, 82: Voice recognition unit.

Abstract

The purpose of the present invention is to construct a handwriting drawing support system that makes it possible to easily create a handwritten image conforming to a model image by using an HMD. In order to achieve this purpose, provided is a head-mounted display whose display is controlled by a control unit and which is provided with a three-dimensional sensor. The control unit is configured to recognize the shape of an object on the basis of information from the three-dimensional sensor and to display, on a display unit, an AR object conforming to the shape of the object.

Description

[Title of invention determined by the ISA under Rule 37.2] Head-mounted display that displays AR objects
The present invention relates to a handwriting drawing support system.
In recent years, various portable information terminals, typified by smartphones, have come onto the market. Among them, a head-mounted display (hereinafter "HMD") can display, on a glasses-type display screen, an image of the real space superimposed with a computer-generated augmented reality (AR) image (an AR object such as an avatar). Further, the user can obtain various information from the sensors mounted on the HMD. As a result, information from the three-dimensional sensor built into the HMD makes it possible to present a computer-generated AR object in the real space as if it actually existed there.

On the other hand, a handwritten image conforming to a model image can conventionally be created by using the model image as a sketch and faithfully tracing it by hand, but it has been difficult to enlarge/reduce or rotate/deform the model image.

Patent Document 1 is a prior art document in this technical field. Patent Document 1 describes a method of supporting handwriting drawing by projecting digital information (an image) onto paper on a work surface with a projector.
International Publication No. WO 2014/073346
Patent Document 1 discloses displaying a model image to support handwriting drawing, but gives no consideration to enlarging/reducing or rotating/deforming the model image, or to non-planar work surfaces. Further, although Patent Document 1 states that an HMD may be used instead of the projector, it contains no description of how this would be realized, so the feasibility with an HMD is unknown.

In view of the above problems, an object of the present invention is to construct a handwriting drawing support system that allows a handwritten image conforming to a model image to be created easily by using an HMD.

To give one example, the present invention is a head-mounted display whose display is controlled by a control unit and which is provided with a three-dimensional sensor; the control unit grasps the shape of an object based on information from the three-dimensional sensor and displays an AR object on the display unit along the shape of the object.

According to the present invention, it is possible to provide a handwriting drawing support system that enables easy creation of a handwritten image conforming to a model image.
FIG. 1 is an external view of the HMD in Example 1.
FIG. 2 is a hardware configuration diagram of the HMD in Example 1.
FIG. 3 is a functional block configuration diagram of the HMD in Example 1.
FIG. 4 is a schematic diagram for explaining the operating state in Example 1.
FIG. 5 is a schematic diagram showing the correspondence between the real space and the AR object on the display screen in Example 1.
FIG. 6 is a flowchart showing the outline of the processing procedure in Example 1.
FIG. 7 is a flowchart showing the details of the model image selection process in Example 1.
FIG. 8 is a schematic diagram showing the model image presentation state in Example 1.
FIG. 9 is a schematic diagram showing the model image selection state in Example 1.
FIG. 10 is a schematic diagram showing the erased state of the non-selected model image in Example 1.
FIG. 11 is a flowchart showing the details of the model image transformation/movement process in Example 1.
FIG. 12 is a flowchart showing the details of the model image handwriting process in Example 1.
FIG. 13 is a schematic diagram showing the state before handwriting starts in Example 1.
FIG. 14 is a schematic diagram showing a partway state of handwriting in Example 1.
FIG. 15 is a schematic diagram showing the completed handwritten image in Example 1.
FIG. 16 is a schematic diagram showing an example of mismatch between the model image and the handwritten image in Example 1.
FIG. 17 is a gesture operation correspondence table showing the gesture operations in Example 1 and the processing contents of the model image for those operations.
FIG. 18 is a voice command correspondence table showing the voice commands in Example 2 and the processing contents of the model image for those commands.
FIG. 19 is a schematic diagram showing an example in which the handwriting drawing support system in Example 3 is applied to penmanship.
FIG. 20 is a schematic diagram in which the model image and the handwritten image are displayed side by side on the display screen of the HMD in Example 5.
FIG. 21 is a schematic diagram showing an example in which the handwriting drawing support system in Example 6 is applied to a non-planar surface.
Hereinafter, examples of the present invention will be described with reference to the drawings.
FIG. 1 is an external view showing the HMD in this embodiment. In FIG. 1, the HMD 1 has a transmissive display screen 75 at the lens position of the glasses, and the situation in the real space is observed through the display screen 75. An augmented reality AR object is also displayed on the display screen 75. Therefore, the wearer of the HMD 1 can simultaneously visually recognize both the augmented reality AR object displayed on the display screen 75 and the situation in the real space.

FIG. 2 is a hardware configuration diagram of the HMD in this embodiment. In FIG. 2, the HMD 1 is composed of a main control device 2, a system bus 3, a storage device 4, a sensor device 5, a communication processing device 6, a video processing device 7, a voice processing device 8, and an operation input device 9.

The main control device 2 is a microprocessor unit that controls the entire HMD 1 according to predetermined operation programs. That is, each function is realized in software by the microprocessor unit interpreting and executing the operation program that implements that function.
The system bus 3 is a data communication path for transmitting and receiving various commands and data between the main control device 2 and each constituent block in the HMD 1.

The storage device 4 is composed of a program unit 41 that stores operation programs for controlling the operation of the HMD 1, various data units 42 that store various data such as operation setting values, detection values from the sensor units, objects including contents, and library information downloaded from a library, and a rewritable program function unit 43 such as a work area used by various program operations.

The storage device 4 can also store operation programs downloaded from the network and various data created by those programs, contents such as moving images, still images, and sounds downloaded from the network, and data such as moving images and still images taken with the camera function. Further, the storage device 4 needs to hold the stored information even when the HMD 1 is not supplied with power from the outside. Therefore, devices such as semiconductor element memory (e.g., a flash ROM or SSD (Solid State Drive)) and magnetic disk drives (e.g., an HDD (Hard Disc Drive)) are used. The operation programs may be stored in the program unit 41 or the like of the HMD 1 in advance at the time of product shipment, may be acquired from server devices on the Internet after product shipment, or may be acquired as operation programs provided on a memory card, optical disc, or the like. Each operation program stored in the program unit 41 can be updated and its functions can be expanded by a download process from server devices on the network.
The sensor device 5 is a group of various sensors for detecting the state of the HMD 1. The sensor device 5 includes a GPS (Global Positioning System) receiving unit 51, a geomagnetic sensor unit 52, a three-dimensional sensor unit 53, an acceleration sensor unit 54, a gyro sensor unit 55, and the like. With these sensors, the position, tilt, direction, movement, etc. of the HMD 1 can be detected. The HMD 1 may further include other sensors such as an illuminance sensor, an altitude sensor, and a proximity sensor.

The three-dimensional sensor unit 53 of this embodiment is described taking the phase difference method (phase shift method) as an example, but it is not limited to this method. The phase difference method irradiates an object with a plurality of modulated laser beams and measures the distance to the object from the phase difference of the returning diffuse reflection component. With the three-dimensional sensor unit 53, the distance to each point of the object can be grasped.

The communication processing device 6 is composed of a LAN (Local Area Network) communication unit 61 and a telephone network communication unit 62. The LAN communication unit 61 is connected to a network such as the Internet via an access point or the like, and transmits and receives data to and from server devices on the network. The connection with the access point or the like may be made by a wireless connection such as Wi-Fi (registered trademark). The telephone network communication unit 62 performs telephone communication (calls) and data transmission/reception by wireless communication with base stations of a mobile telephone communication network. Communication with a base station may be performed by the W-CDMA (Wideband Code Division Multiple Access) (registered trademark) method, the GSM (Global System for Mobile communications) (registered trademark) method, the LTE (Long Term Evolution) method, or another communication method. The LAN communication unit 61 and the telephone network communication unit 62 each include a coding circuit, a decoding circuit, an antenna, and the like. The communication processing device 6 may further include other communication units such as a Bluetooth (registered trademark) communication unit and an infrared communication unit.
 映像処理装置7は、撮像部71、表示部72、で構成される。撮像部71は、CCD(Charge Coupled Device)やCMOS(Complementary Metal Oxide Semiconductor)センサ等の電子デバイスを用いてレンズから入力した光を電気信号に変換することにより、周囲や対象物の画像データを入力するカメラユニットである。表示部72は、例えばレーザープロジェクターとハーフミラー等を使った透過型ディスプレイの表示デバイスであり、表示画面75を構成し、画像データをHMD1のユーザに提供する。 The video processing device 7 is composed of an imaging unit 71 and a display unit 72. The image pickup unit 71 inputs image data of the surroundings and an object by converting the light input from the lens into an electric signal using an electronic device such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) sensor. It is a camera unit that does. The display unit 72 is a display device of a transmissive display using, for example, a laser projector and a half mirror, constitutes a display screen 75, and provides image data to a user of HMD1.
 音声処理装置8は、音声入出力部81、音声認識部82、音声復号部83とで構成される。音声入出力部81の音声入力はマイクであり、ユーザの声などを音声データに変換して入力する。また、音声入出力部81の音声出力はスピーカであり、ユーザに必要な音声情報等を出力する。音声認識部82は、入力された音声情報を解析し、指示コマンド等を抽出する。音声復号部83は、必要に応じて、符号化音声信号の復号処理(音声合成処理)等を行う機能を有する。 The voice processing device 8 is composed of a voice input / output unit 81, a voice recognition unit 82, and a voice decoding unit 83. The voice input of the voice input / output unit 81 is a microphone, and the user's voice or the like is converted into voice data and input. Further, the voice output of the voice input / output unit 81 is a speaker, and outputs voice information and the like necessary for the user. The voice recognition unit 82 analyzes the input voice information and extracts instruction commands and the like. The voice decoding unit 83 has a function of performing decoding processing (speech synthesis processing) of the coded voice signal and the like, if necessary.
 操作入力装置9は、HMD1に対する操作指示の入力を行う指示入力部である。操作入力装置9は、ボタンスイッチ等を並べた操作キー、等で構成される。その他の操作デバイスを更に備えても良い。通信処理装置6を利用し、有線通信または無線通信により接続された別体の携帯端末機器を用いてHMD1の操作を行っても良い。また、音声処理装置8の音声認識部82を利用して、操作指示の音声コマンドによりHMD1の操作を行なっても良い。また、映像処理装置7の撮像部71の撮影映像を解析し、ジェスチャなどの動作で、HMD1の操作を行なっても良い。 The operation input device 9 is an instruction input unit for inputting an operation instruction to the HMD1. The operation input device 9 is composed of operation keys and the like in which button switches and the like are arranged. Other operating devices may be further provided. The communication processing device 6 may be used to operate the HMD 1 by using a separate mobile terminal device connected by wired communication or wireless communication. Further, the voice recognition unit 82 of the voice processing device 8 may be used to operate the HMD 1 by a voice command of an operation instruction. Further, the HMD1 may be operated by analyzing the captured image of the imaging unit 71 of the image processing device 7 and performing an operation such as a gesture.
 なお、図2に示したHMD1の構成例は、本実施例に必須ではない構成も多数含んでいるが、これらが備えられていない構成であっても本実施例の効果を損なうことはない。また、デジタル放送受信機能や電子マネー決済機能等、図示していない構成が更に加えられていても良い。 Although the configuration example of HMD1 shown in FIG. 2 includes many configurations that are not essential to this embodiment, the effect of this embodiment is not impaired even if these configurations are not provided. Further, configurations (not shown) such as a digital broadcast reception function and an electronic money payment function may be further added.
 FIG. 3 is a functional block diagram of the HMD in this embodiment. In FIG. 3, the control unit 11 mainly comprises the main control device 2 together with the program unit 41 and the program function unit 43 of the storage device 4, and constitutes the handwriting drawing support system.
 The three-dimensional sensor information acquisition unit 12 has the function of acquiring information from the three-dimensional sensor unit 53 of the sensor device 5. The information from the three-dimensional sensor unit 53 includes the distance from the HMD 1 to each point on the object.
 The three-dimensional data processing unit 13 has the function of grasping the shape of the object based on the information from the three-dimensional sensor unit 53 (the distances from the HMD 1 to each point on the object).
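 The embodiment does not specify how the per-point distances are reduced to a surface shape; one standard possibility is a least-squares plane fit, sketched below with NumPy (the function name is hypothetical).

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through an N x 3 cloud of sensed points.

    Returns (centroid, unit normal): the normal is the right singular
    vector of the mean-centered cloud with the smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]
```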
 The three-dimensional data storage unit 14 has the function of saving the three-dimensional data obtained by the three-dimensional data processing unit 13 in the various data unit 42 of the storage device 4.
 The shooting data acquisition unit 15 has the function of photographing the real space with the imaging unit 71 of the video processing device 7 and acquiring the captured data. Through the shooting data acquisition unit 15, handwritten image data and information such as gesture motions can be acquired.
 The communication processing unit 16 comprises the LAN communication unit 61 and the telephone network communication unit 62 of the communication processing device 6, and has functions for uploading to, and downloading various information from, external network servers via the Internet. Images of AR objects such as model images are likewise downloaded from and uploaded to external servers through the communication processing unit 16. By downloading model images from an external server into the various data unit 42 of the storage device 4, a rich variety of model images can be presented to the user.
 The AR image information storage unit 17 has the function of saving the AR image information obtained by the communication processing unit 16 in the various data unit 42 of the storage device 4.
 The AR image generation processing unit 18 has the function of generating AR objects based on the AR object information held in the AR image information storage unit 17. The AR image generation processing unit 18 can apply processing such as enlargement, reduction, rotation, and deformation to the model image.
 The AR image display processing unit 19 has the function of displaying the AR object generated by the AR image generation processing unit 18 on the display screen 75 of the HMD 1. Based on the shape information of the handwriting surface held in the three-dimensional data storage unit 14, it can display the model image (the virtual image of the AR object) so as to follow the shape of the handwriting surface.
 The operation of this embodiment is described below. FIG. 4 is a schematic diagram for explaining an operating state in this embodiment. In FIG. 4, the user 10 wearing the HMD 1 views the handwriting surface 28 through the display screen 75 of the HMD 1. To the user 10 wearing the HMD 1, the AR object displayed on the display screen 75 appears as a virtual image 27 of an AR object that seems to exist on the handwriting surface 28.
 The three-dimensional information of the handwriting surface 28 held by the three-dimensional data storage unit 14 captures the difference between the distance from the HMD 1 to the near side of the handwriting surface 28 and the distance from the HMD 1 to its far side (the distance to the far side is longer than the distance to the near side). This is explained with the schematic diagram of FIG. 5.
 FIG. 5 is a somewhat stylized schematic diagram showing the correspondence between the real space and the AR object on the display screen. As shown in FIG. 5, the rectangular AR object 22 displayed on the display screen 75 of the HMD 1 is drawn with its far edge shorter than its near edge and its far edge thinner than its near edge (perspective rendering), so the user 10 wearing the HMD 1 can view the virtual image 27 of the AR object as if it existed on the handwriting surface 28.
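 The foreshortening just described is ordinary pinhole perspective; the following sketch, with an assumed focal length of 500 pixels and illustrative coordinates, shows why the far edge of a flat rectangle renders shorter than the near edge.

```python
import numpy as np

def project(points_3d: np.ndarray, focal: float = 500.0) -> np.ndarray:
    """Pinhole projection of camera-space points (N x 3) to pixels (N x 2).

    Dividing x and y by the depth z is what shrinks the far edge of a
    rectangle lying on the desk relative to its near edge.
    """
    return focal * points_3d[:, :2] / points_3d[:, 2:3]

# A 10 cm wide square on a desk: near edge 0.4 m away, far edge 0.5 m away.
square = np.array([[-0.05, -0.1, 0.4], [0.05, -0.1, 0.4],
                   [-0.05, -0.1, 0.5], [0.05, -0.1, 0.5]])
print(project(square))  # far corners land at +/-50 px, near ones at +/-62.5 px
```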
 Next, an overview of the processing procedure executed on the HMD 1 by the control unit 11 in this embodiment is described with the flowchart of FIG. 6. In FIG. 6, when the processing starts, a model image selection process is first performed in step S420 to select the model image to be handwritten. The details of the model image selection process S420 are described later.
 Next, in step S430, it is determined whether there is an instruction such as deformation or movement for the selected model image.
 If the processing of S430 finds an instruction such as deformation or movement for the selected model image, a model image deformation/movement process covering enlargement, reduction, rotation, deformation, position movement, and the like is performed in step S440, and the model image to be handwritten is finalized. The details of the model image deformation/movement process S440 are described later.
 If the processing of S430 finds no deformation or movement instruction for the selected model image, that is, if the selected model image can be used as it is, the processing proceeds to step S460.
 The processing of step S460 is a model image handwriting process that supports handwriting on the handwriting paper on the handwriting surface; it supports the drawing by having the user trace the model image. The details of the model image handwriting process S460 are described later. Instructions such as deformation and movement of the model image remain effective even while the model image handwriting process S460 is running, so the optimal model image can be presented at any time. This concludes the overview of the processing procedure on the HMD 1.
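 The overall flow of FIG. 6 could be summarized in code roughly as follows; the hmd object and every method on it are hypothetical placeholders for the units described above, not an API defined by the embodiment.

```python
def handwriting_support_session(hmd) -> None:
    """Control flow mirroring FIG. 6: select (S420), transform on
    request (S430 -> S440), then trace-support handwriting (S460)."""
    model = hmd.select_model_image()              # S420
    while not hmd.handwriting_finished():
        if hmd.has_transform_request():           # S430
            model = hmd.transform_or_move(model)  # S440
        hmd.support_tracing(model)                # S460
```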
 Next, the details of each processing procedure are described. FIG. 7 is a flowchart showing the details of the model image selection process S420. In FIG. 7, when the model image selection process S420 starts, a handwriting surface shape grasping process (S422) is performed first. In the handwriting surface shape grasping process (S422), the three-dimensional sensor information acquisition unit 12 measures the distance from the HMD 1 to each point on the handwriting surface 28, and the three-dimensional data processing unit 13 grasps the shape of the handwriting surface 28. The shape of the handwriting surface 28 obtained here is saved by the three-dimensional data storage unit 14 in the various data unit 42 of the storage device 4. If the shape of the handwriting surface has already been grasped and saved in the various data unit 42 of the storage device 4, the saved shape information is used.
 The handwriting surface 28 in this embodiment is assumed to be a rectangular plane. Since the handwriting surface 28 is rectangular, its shape can be grasped by locating its four corner points. Moreover, once the positions of the four corners have been grasped, even if one of them temporarily falls outside the measurable range, the position of the fourth point can be estimated from the positions of the remaining three. Instead of locating the four corners, the shape of the handwriting surface 28 may equally be grasped by locating its four edges.
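 Estimating the missing corner from the remaining three is the parallelogram identity; a minimal sketch (NumPy, with the corners assumed to be consecutive around the rectangle):

```python
import numpy as np

def fourth_corner(p1, p2, p3):
    """Missing corner of a rectangle from three consecutive corners.

    For corners p1 -> p2 -> p3 taken in order around the rectangle,
    the corner opposite p2 is p1 + p3 - p2.
    """
    return np.asarray(p1) + np.asarray(p3) - np.asarray(p2)

print(fourth_corner([0, 0], [4, 0], [4, 3]))  # -> [0 3]
```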
 Next, a model image presentation process (S423) for presenting model images is performed. FIG. 8 is a schematic diagram in which a first model image 31 and a second model image 32 are presented on the handwriting surface 28. For simplicity, the number of selectable models is two, the first model image 31 is a rectangle, the second model image 32 is a triangle, and line-drawn illustrations are used.
 In this embodiment, video information from the shooting data acquisition unit 15 is acquired, the gesture motion is analyzed, and the model image to present is selected and displayed. In the state of FIG. 8, where the model images are presented, gestures of a finger or of the whole hand are targeted. That is, the screen of presented model images can be switched by a gesture of sliding the whole hand to the left or right: sliding the whole hand to the left presents the previous model image screen, and sliding the whole hand to the right presents the next model image screen.
 Next, a model image selection process (S424) for selecting a model image is performed. FIG. 9 is a schematic diagram showing the state in which the first model image 31 presented on the handwriting surface 28 has been selected. The model image 31 is selected by the finger 39 pointing at the first model image 31 presented on the handwriting surface 28.
 In this embodiment, video information and three-dimensional distance information for the finger 39 are acquired from the shooting data acquisition unit 15 and the three-dimensional sensor information acquisition unit 12, and the AR image display processing unit 19 processes the image of the AR object displayed on the display screen 75 of the HMD 1 accordingly. The schematic diagram of FIG. 9 therefore shows that the part of the first model image 31 (the virtual image of the AR object) lying under the finger 39 appears hidden.
 The selection instruction is given by a gesture of the finger 39. Specifically, when it is detected that the finger 39 is at the same distance as the handwriting surface 28 within its extent, and the finger 39 is pointing at the model image 31, the selection instruction can be given by a double-click gesture of the finger 39.
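 The touch test described here compares the sensed depth of the fingertip with the sensed depth of the surface point beneath it; a sketch with an assumed 1 cm tolerance (all names hypothetical):

```python
def finger_on_surface(finger_depth_m: float, surface_depth_m: float,
                      tol_m: float = 0.01) -> bool:
    """True when the fingertip reads (almost) the same distance from the
    3D sensor as the handwriting surface directly beneath it."""
    return abs(finger_depth_m - surface_depth_m) <= tol_m
```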
 Next, a non-selected model image erasing process (S425) is performed to erase the non-selected model images presented on the handwriting surface 28. An example of selecting the model image 31 on the handwriting surface 28 has been shown here, but the selection need not be made on the handwriting surface 28; the model image 31 may instead be selected on the display screen 75 of the HMD 1.
 FIG. 10 is a schematic diagram showing that the second model image 32, having not been selected, has been erased from the handwriting surface 28, and only the selected first model image 31 is presented. This concludes the model image selection process S420.
 Next, FIG. 11 is a flowchart showing the details of the model image deformation/movement process S440. In this embodiment, the gesture instructions are analyzed in the order enlargement/reduction, rotation/deformation, and position movement, but the analysis is not limited to this order.
 In FIG. 11, when the model image deformation/movement process S440 starts, the gesture instructions concerning deformation and movement of the model image are analyzed. First, in step S442, it is determined whether the given gesture instruction is an enlargement or reduction instruction. If it is, an enlargement/reduction process (S443) is performed, and the model image is presented enlarged or reduced. When a gesture of opening the gap between thumb and index finger is recognized, the model image is presented enlarged; when a gesture of closing the gap between thumb and index finger is recognized, the model image is presented reduced. The enlargement and reduction ratios in this embodiment depend on the magnitude of the gesture, but arbitrary preset ratios may also be used.
 Next, it is determined whether the given gesture instruction is a rotation or deformation instruction (process S444). If it is, a rotation/deformation process (S445) is performed, and the model image is presented rotated or deformed. When a gesture of sliding a finger in an arc is recognized, the model image is presented rotated in the direction of the slide. The rotation angle in this embodiment depends on the arc drawn by the finger, but an arbitrary preset rotation angle may also be used. When a gesture of pressing one point on the outline of the model image and sliding it is recognized, the model image is presented deformed in the direction of the slide.
 Next, it is determined whether the given instruction is a position movement (process S446). If it is, a position movement process (S447) is performed, and the position of the model image is moved. When a gesture of pressing the inside of the model image and sliding it is recognized, the model image is presented moved in the direction of the slide. The movement distance in this embodiment depends on the magnitude of the gesture, but an arbitrary preset movement distance may also be used. This concludes the model image deformation/movement process S440.
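 The operations of S443, S445, and S447 correspond to the standard 2D transforms about a reference point; one way to apply them to the model image's points, sketched with NumPy (function names hypothetical):

```python
import numpy as np

def scale(points, factor, center):
    """Enlarge (factor > 1) or reduce (factor < 1) N x 2 points
    about a center point."""
    return center + factor * (points - center)

def rotate(points, angle_rad, center):
    """Rotate N x 2 points about a center point by angle_rad."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return center + (points - center) @ np.array([[c, -s], [s, c]]).T

def translate(points, offset):
    """Move points by a 2D offset (the position-movement case)."""
    return points + np.asarray(offset)
```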
 Next, FIG. 12 is a flowchart showing the details of the model image handwriting process S460. In FIG. 12, when the model image handwriting process S460 starts, a handwriting paper shape grasping process (S462) is performed first. In the handwriting paper shape grasping process (S462), the three-dimensional sensor information acquisition unit 12 measures the distance from the HMD 1 to each point on the handwriting paper, and the three-dimensional data processing unit 13 grasps the shape of the handwriting paper. The shape of the handwriting paper obtained here is saved by the three-dimensional data storage unit 14 in the various data unit 42 of the storage device 4. If the shape of the handwriting paper has already been grasped and saved in the various data unit 42 of the storage device 4, the saved shape information is used.
 The handwriting paper in this embodiment is assumed to be a rectangular plane. Since it is a rectangular plane, the shape of the handwriting paper can be grasped by locating its four corner points. Of course, as long as the overall shape of the handwriting paper and the distance to each point on it can be measured, the shape of the handwriting paper is not limited to a rectangular plane.
 Next, a selected model image presentation process (S463) for presenting the selected model image is performed. FIG. 13 is a schematic diagram in which the handwriting paper 25 is placed on the handwriting surface 28, and a rabbit model image 35, the virtual image of an AR object, is made visible on it. If the rabbit model image 35 is judged to stray off the handwriting paper 25 or to be poorly balanced, the model image deformation/movement process (S440) is executed again to optimize the model image 35.
 Next, a model image handwriting process (S464) is performed in which the user handwrites by tracing the model image. FIG. 14 is a schematic diagram showing the state in which the outline (the face) of the rabbit model image 35 has been handwritten with a pen 38. In this embodiment, video information is acquired from the shooting data acquisition unit 15 and analyzed to determine which parts have been handwritten, and how far. In the schematic diagram of FIG. 14, the handwritten portion 36 (the rabbit's face) traced over the rabbit model image 35 is drawn as a slightly thicker line. In the schematic diagram of FIG. 14, the portion of the rabbit model image 35 that overlaps the handwritten portion 36 is erased at the AR object stage on the display screen 75 of the HMD 1. As a result, the corresponding portion of the rabbit model image 35 that overlaps the handwritten portion 36 is presented as erased from the handwriting paper 25.
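 One plausible realization of this erase-what-has-been-traced behavior, assuming (hypothetically) that the model image and the captured pen strokes are registered boolean pixel masks over the paper; the wrap-around dilation via np.roll is a brevity shortcut, not a detail of the embodiment:

```python
import numpy as np

def remaining_model(model_mask: np.ndarray, drawn_mask: np.ndarray,
                    halo: int = 2) -> np.ndarray:
    """Model pixels still to be displayed: hide those already traced.

    A model pixel counts as traced when a drawn pixel lies within
    `halo` pixels of it, tolerating a slightly wobbly pen line.
    """
    traced = drawn_mask.copy()
    for _ in range(halo):  # crude binary dilation (edges wrap around)
        traced = (traced
                  | np.roll(traced, 1, 0) | np.roll(traced, -1, 0)
                  | np.roll(traced, 1, 1) | np.roll(traced, -1, 1))
    return model_mask & ~traced
```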
 FIG. 15 is a schematic diagram showing the completed handwritten image 37 after the entire rabbit model image 35 has been traced. In the schematic diagram of FIG. 15, the rabbit model image 35 has been fully handwritten (handwritten image 37), so no AR object is displayed on the display screen 75 of the HMD 1, and the rabbit model image 35 is accordingly not presented on the handwriting paper 25. The schematic diagram of FIG. 15 shows the state in which the rabbit model image 35 and the handwritten image 37 coincide perfectly. This concludes the model image handwriting process S460.
 Although the drawings of this embodiment are schematic diagrams handwritten in black only, any color can of course be used, and it goes without saying that the rabbit can be colored in any color. The line thickness of the model image can also be matched to the line thickness of the handwritten image. It is also possible to add a center line (a line representing the center of the stroke width) to the lines of the model image, as a dotted line, dashed line, dash-dot line, or the like, presented in a color (including white) different from that of the model image.
 In this embodiment the model image 35 and the handwritten image 37 coincided perfectly, but they may not always do so. The schematic diagram of FIG. 16 shows a state in which the rabbit model image 35 and the handwritten image 37 do not fully coincide. In FIG. 16, the rabbit's face in the handwritten image 37 does not match the rabbit model image 35; to make this state clear, the non-matching portion 33 of the rabbit model image 35 is shown with a broken line in this schematic diagram. After finishing the handwriting, the user can thereby clearly recognize which parts differ from the model image.
 In the schematic diagram of FIG. 16, the model image 35 and the handwritten image 37 are viewed superimposed, but the model image 35 and the handwritten image 37 can also be viewed side by side. Besides the broken-line display described above, there are various other ways to distinguish the handwritten image 37 from the model image 35. For example, the difference between the handwritten image 37 and the model image 35 can be made clear by giving the model image 35 a color different from that of the handwritten image 37, by changing the line thickness or style of the model image 35, or by blinking the model image 35.
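 Under the same hypothetical mask representation as above, the portions to highlight after completion are simple set differences:

```python
import numpy as np

def mismatch(model_mask: np.ndarray, drawn_mask: np.ndarray):
    """(missed, strayed): model pixels never traced, and strokes with
    no counterpart in the model, e.g. to be shown as broken lines."""
    return model_mask & ~drawn_mask, drawn_mask & ~model_mask
```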
 Next, the gesture motions used in this embodiment and the processing applied to the model image 35 for each gesture motion are described. FIG. 17 is a gesture correspondence table showing the gesture motions in this embodiment and the model-image processing for each motion. In FIG. 17, the gesture correspondence table 500 consists of the gesture list 501 used in this embodiment and the processing list 502 showing the model-image processing for each gesture.
 For example, executing gesture 503, placing an open hand on the handwriting surface 28 on which a model image is presented and moving the hand to the right, invokes the corresponding processing 504, namely presenting the screen showing the next model image. Descriptions of the other gestures listed in the gesture correspondence table 500 and of their model-image processing are omitted here. Each gesture presented is merely an example, and other gestures can of course be made effective as well.
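 A correspondence table of this kind maps naturally onto a dispatch dictionary. In the sketch below, the pairings follow the gestures described in this and the preceding sections, while the gesture identifiers, handler names, and the hmd object are hypothetical:

```python
GESTURE_ACTIONS = {
    "slide_open_hand_right": "show_next_model",  # gesture 503 -> processing 504
    "slide_open_hand_left": "show_previous_model",
    "finger_double_click": "select_model",
    "pinch_open": "enlarge_model",
    "pinch_close": "reduce_model",
    "finger_arc_slide": "rotate_model",
    "drag_outline_point": "deform_model",
    "drag_inside_image": "move_model",
}

def dispatch_gesture(gesture: str, hmd) -> None:
    """Invoke the model-image operation registered for a recognized gesture."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        getattr(hmd, action)()
```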
 In this way, in this embodiment, the shape of the handwriting surface, such as the paper on which a handwritten image is to be created, is grasped from the information of the three-dimensional sensor built into the HMD, and a computer-generated augmented-reality AR object (the model image) is presented on the handwriting surface along the grasped shape, as if it existed in the real space; the creation of the handwritten image is supported by superimposing the model image and the handwritten image. The control processing of the HMD also applies operations such as enlargement, reduction, rotation, and deformation to the model image.
 As described above, according to this embodiment, the model image can be placed at the optimal size and in the optimal position, and a handwriting drawing support system can be provided with which a handwritten image conforming to the model image can easily be created.
 This embodiment describes a technique for instructing the processing of the model image by means other than gestures. The basic hardware configuration of this embodiment is the same as that of Example 1, and its description is omitted.
 In this embodiment, voice commands are used instead of gestures to instruct the processing of the model image. A voice command is obtained by having the voice recognition unit 82 analyze the user's voice entered through the audio input/output unit 81 of the audio processing device 8 and extract it as a voice command.
 The voice commands used are described here. FIG. 18 is the correspondence table of the voice commands used in this embodiment. In FIG. 18, the voice command correspondence table 520 consists of a voice command list 521, a meaning list 522 showing the meaning of each voice command, and a processing list 523 showing the model-image processing corresponding to each voice command.
 For example, uttering the voice command "MAEGAMEN" 524 (meaning "previous screen") invokes the corresponding processing 525 for the model image, namely presenting the previous model image screen. Descriptions of the other voice commands listed in the voice command correspondence table 520 and of their model-image processing are omitted here. Each voice command is merely an example, and other voice commands can of course be made effective as well.
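 The voice command table admits the same dispatch structure as the gesture table; in the sketch below, only "MAEGAMEN" (previous screen) comes from the text, and the other romanized commands, the handler names, and the hmd object are assumptions:

```python
VOICE_COMMANDS = {
    "MAEGAMEN": "show_previous_model",  # "previous screen" (from the table)
    "TSUGINOGAMEN": "show_next_model",  # assumed: "next screen"
    "KAKUDAI": "enlarge_model",         # assumed: "enlarge"
    "SHUKUSHO": "reduce_model",         # assumed: "reduce"
}

def dispatch_voice(recognized: str, hmd) -> None:
    """Invoke the model-image operation registered for a recognized command."""
    action = VOICE_COMMANDS.get(recognized)
    if action is not None:
        getattr(hmd, action)()
```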
 A voice response corresponding to a voice command can also be made using the speech synthesis function of the audio decoding unit 83 of the audio processing device 8. For example, the system can respond by repeating the input voice command back to the user (parroting), or confirm the input voice command: if the user answers "hai" (yes) to the confirmation "XX desu ne?" ("XX, correct?"), the system responds "XX wo jikkou shimasu" ("executing XX"); if the user answers "iie" (no), it responds "XX wo torikeshimasu" ("canceling XX"). The voice command correspondence table 520 of FIG. 18 does not list the voice commands concerning such responses ("hai", "iie"), but it goes without saying that they are supported.
 In this way, in this embodiment, the processing of the model image can be instructed by voice commands, so a model image can be selected without performing any gesture. Needless to say, there are other techniques for instructing the model-image processing as well. For example, it can also be realized by mounting a touch sensor on the side of the HMD 1 (the temple of the glasses) and performing up, down, left, and right slide operations and click operations.
 In Example 1, for simplicity, the model images were figures such as rectangles and triangles and line-drawn illustrations such as a rabbit, but other images can also serve as model images. This embodiment describes an example in which the handwriting support system described in Examples 1 and 2 is applied to calligraphy practice. The basic hardware configuration of this embodiment is the same as that of Example 1, and its description is omitted.
 FIG. 19 is a schematic diagram showing the procedure when the character "永" (ei), said to be the foundation of calligraphy and well known through the Eight Principles of Yong, is used as the calligraphy model. FIG. 19 shows the procedure for learning the stroke order of the five-stroke kanji "永".
 FIG. 19(a) shows the first stroke (the dot) of the character "永" displayed in outline as the model image. The user traces the outlined first stroke (the dot) of the model image and writes it with a brush.
 FIG. 19(b) shows the state in which, after the user has finished handwriting the first stroke (the dot), the second stroke (horizontal, vertical, hook) of the character "永" is displayed in outline as the model image.
 The model image of FIG. 19(c) shows the third stroke (the rightward-rising horizontal and the left sweep), the model image of FIG. 19(d) the fourth stroke (the short left sweep), and the model image of FIG. 19(e) the fifth stroke (the right sweep), each displayed in outline. The user writes with the brush following the presentation order of the outlined model images.
 FIG. 19(f) shows the state in which the handwriting of the character "永" is finished and only the handwritten character "永" 606 remains.
 With this embodiment, the stroke order of kanji can easily be learned visually. Instead of the outline display, the ink may be rendered slightly lighter, or the color may be changed.
 This embodiment has described the technique best suited to learning stroke order, in which the model image is divided stroke by stroke and presented sequentially; beyond stroke order, however, differences from the model image in aspects such as the application of pressure from the first stroke to the last and the balance of the character can also be shown visually.
 In this embodiment the model image was displayed one stroke at a time, but the entire model image can also be displayed and used as the model. A brush was used in this embodiment, but it goes without saying that other writing instruments such as pens may also be used.
 As an application of this embodiment, a photograph can also serve as the model image. By tracing the outlines of the subjects in the photograph (buildings, people, objects, and so on), the same processing as for the model images described above can be performed, realizing the handwriting drawing support system of the present invention.
 The preceding embodiments assumed that the partially written handwriting paper 25 does not move; this embodiment describes the processing when the partially written handwriting paper 25 moves. The basic hardware configuration of this embodiment is the same as that of Example 1, and its description is omitted.
 The partially written state of the handwriting paper in this embodiment means, for example, the state of the schematic diagram of FIG. 14 (a partially handwritten state).
 In this embodiment, when the three-dimensional sensor unit 53 of the sensor device 5 detects a movement (including rotation) of the handwriting paper 25, the rabbit model image 35 is moved, including rotation, to match the movement direction (or, for rotation, the rotation angle) and movement distance of the handwriting paper 25. As a result, the portion of the rabbit model image 35 that had overlapped the partially handwritten portion 36 moves to the position where it again overlaps the partially handwritten portion 36, so that the relative positions of the partially handwritten portion 36 and the rabbit model image 35 on the handwriting paper 25 remain consistent.
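 The motion to apply to the model image can be estimated from the paper's tracked corner positions before and after the move; a least-squares rigid fit in 2D (the Kabsch method), sketched with NumPy (function name hypothetical):

```python
import numpy as np

def rigid_transform_2d(src: np.ndarray, dst: np.ndarray):
    """Rotation R and translation t best mapping corners src to dst (N x 2).

    Applying the same R and t to the model image keeps it registered to
    the partially handwritten portion as the paper moves or rotates.
    """
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T
    return r, dc - r @ sc
```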
 If the user's posture changes and the distance or angle between the handwriting paper 25 and the HMD 1 changes, there is the problem that leaving the model image 35 as it is breaks the perspective consistency on the handwriting paper 25.
 The following technique is applied in such cases as well. When the three-dimensional sensor unit 53 of the sensor device 5 detects that the distance or angle between the handwriting paper 25 and the HMD 1 has changed, the shape of the handwriting paper 25 is grasped again by the three-dimensional sensor unit 53 of the sensor device 5, and the model image 35 is corrected and presented so that it appears to exist on the handwriting paper 25.
 Likewise, when the handwriting paper 25 is tilted vertically away from the handwriting surface, the situation is the same as when the distance or angle between the handwriting paper 25 and the HMD 1 changes: the model image 35 is corrected, and a model image 35 tilted vertically from the handwriting surface is presented, so that the model image 35 appears to exist on the handwriting paper 25.
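 If the re-measured corners of the paper are available in screen coordinates, the corrected presentation amounts to a perspective warp of the flat model image onto that quadrilateral; a sketch assuming OpenCV (cv2) is available, which the embodiment does not state:

```python
import numpy as np
import cv2  # assumed dependency

def reproject_model(model_img: np.ndarray, paper_corners_px, out_size):
    """Warp the flat model image onto the paper's current on-screen quad.

    paper_corners_px: the paper's four corners, ordered to match the
    model image's (top-left, top-right, bottom-right, bottom-left).
    out_size: (width, height) of the output display buffer.
    """
    h, w = model_img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    m = cv2.getPerspectiveTransform(src, np.float32(paper_corners_px))
    return cv2.warpPerspective(model_img, m, out_size)
```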
 As described above, according to this embodiment, even if the handwriting paper moves while the writing is in progress, and even if the distance or angle between the handwriting paper and the HMD changes, the model image can still be displayed as if it existed on the handwriting paper, so the user can use the handwriting drawing support system without any sense of discomfort.
 In the preceding embodiments, by displaying the AR object of the model image on the display screen 75 of the HMD 1, the model image can be viewed on the handwriting surface 28 as a virtual image of the AR object, as if the model image existed on the handwriting surface 28.
 This embodiment describes the processing when the user wearing the HMD 1 raises their face from the handwriting surface and the handwriting paper. The basic hardware configuration of this embodiment is the same as that of Example 1, and its description is omitted.
 FIG. 20 shows the display on the display screen 75 of the HMD 1 in this embodiment. In FIG. 20, the model image 35 and the handwritten image 37 are displayed side by side on the display screen 75. During the model image handwriting process S464 of FIG. 12, the gaze direction of the user wearing the HMD 1 is detected by the gyro sensor unit 55 of the sensor device 5, a gaze sensor (not shown), or the like. Triggered by the user raising their face from the handwriting surface and the handwriting paper, the handwritten image on the handwriting paper is captured by the imaging unit 71, stored in the various data unit 42 of the storage device 4, and the captured image is displayed on the display screen 75. The handwritten image displayed may of course also be an image still in progress.
 As described above, according to this embodiment, the model image and the handwritten image can be compared even when the user is not constantly gazing at the handwriting surface and the handwriting paper.
 The preceding embodiments described the shape of the handwriting surface and the shape of the handwriting paper as rectangular planes, but non-planar shapes other than a rectangular plane are also possible. This embodiment therefore describes the case where the handwriting surface and the handwriting paper are non-planar. The basic hardware configuration of this embodiment is the same as that of Example 1, and its description is omitted.
 FIG. 21 is a schematic diagram of this embodiment in which the handwriting surface 621 has a non-planar, cylindrical shape, and the handwriting paper 622 is placed on the cylinder. The handwriting paper 622 curves in an arc along the surface of the cylinder.
 In this embodiment, the three-dimensional sensor unit 53 of the sensor device 5 grasps that the handwriting surface 621 is a cylindrical curved surface (see process S422 in FIG. 7), and further grasps that the handwriting paper 622 is curved (see process S462 in FIG. 12).
 Although not shown in FIG. 21, the model image (the virtual image of the AR object) is presented curved along the curved surface of the handwriting paper 622, based on the handwriting paper shape obtained in the handwriting paper shape grasping process (see process S462 in FIG. 12).
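 Bending the flat model image around the measured cylinder reduces to converting arc length to angle; a minimal sketch with hypothetical coordinate conventions (u along the curve, v along the cylinder axis):

```python
import numpy as np

def cylinder_points(u: np.ndarray, v: np.ndarray, radius: float) -> np.ndarray:
    """Lift flat image coordinates onto a cylinder of the measured radius.

    Arc length u subtends the angle u / radius, so the lifted point sits
    at (r * sin(theta), v, r * cos(theta)) in a frame on the cylinder axis.
    """
    theta = np.asarray(u, dtype=float) / radius
    return np.stack([radius * np.sin(theta), np.asarray(v, dtype=float),
                     radius * np.cos(theta)], axis=-1)
```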
 Handwriting paper 622 was used in this embodiment, but it goes without saying that one can also write directly on the cylindrical handwriting surface 621 with paint or the like. Moreover, since a model image can be presented along the shape of a non-planar surface such as a plate or a cup, the user can handwrite by tracing a model image presented on such a non-planar surface.
 The embodiments of the present invention have been described above, but the configurations that realize the technique of the present invention are not limited to these embodiments, and various modifications are conceivable. For example, part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. All of these belong to the scope of the present invention. The numerical values, messages, and the like appearing in the text and figures are also merely examples, and using different ones does not impair the effects of the present invention.
 Some or all of the above-described functions and the like of the present invention may be realized in hardware, for example by designing them as integrated circuits. The software processing described in the embodiments may also be used in combination with hardware.
 The present invention is not limited to a glasses-type HMD; any information processing device having a display device in front of the eyes will do, and it may be, for example, of a goggle type or a contact lens type.
1: HMD, 2: main control device, 3: system bus, 4: storage device, 5: sensor device, 6: communication processing device, 7: video processing device, 8: audio processing device, 9: operation input device, 11: control unit, 12: three-dimensional sensor information acquisition unit, 13: three-dimensional data processing unit, 18: AR image generation processing unit, 19: AR image display processing unit, 22: AR object, 25: handwriting paper, 28: handwriting surface, 31, 32, 35: model image, 37: handwritten image, 42: various data unit, 53: three-dimensional sensor unit, 81: audio input/output unit, 82: voice recognition unit.

Claims (10)

  1.  A head-mounted display whose display is controlled by a control unit, the head-mounted display comprising:
     a three-dimensional sensor,
     wherein the control unit grasps the shape of an object based on information from the three-dimensional sensor, and displays an AR object on a display unit along the shape of the object.
  2.  The head-mounted display according to claim 1, wherein the control unit applies enlargement, reduction, rotation, or deformation to the AR object and displays it on the display unit.
  3.  The head-mounted display according to claim 1, wherein, when there are a plurality of candidates for the AR object, the control unit displays the candidate AR objects sequentially on the display unit.
  4.  The head-mounted display according to claim 1, further comprising an imaging unit, wherein the control unit analyzes the shooting data acquired by the imaging unit to obtain gesture information, and control instructions for the display are given by the gestures.
  5.  The head-mounted display according to claim 1, further comprising an audio input unit, wherein the control unit analyzes the user's voice entered through the audio input unit by voice recognition and extracts it as a voice command, and control instructions for the display are given by the voice command.
  6.  The head-mounted display according to claim 1, wherein, when the relative position of the object and the head-mounted display changes, the control unit grasps the shape of the object based on the information from the three-dimensional sensor after the change, corrects the AR object along the shape of the object, and displays it on the display unit.
  7.  The head-mounted display according to claim 1, wherein the AR object is a model image and the object is a handwriting surface, the head-mounted display further comprising an imaging unit and a storage device, wherein, when the control unit detects, from the image information captured by the imaging unit or from the information from the three-dimensional sensor, that the frontal direction of the head-mounted display has moved away from the handwriting surface in a predetermined direction, a handwritten image on handwriting paper placed on the handwriting surface is captured by the imaging unit and stored in the storage device, and the captured handwritten image is displayed on the display unit together with the model image.
  8.  The head-mounted display according to claim 1, wherein, even when the shape of the object is non-planar, the control unit grasps the shape of the object based on the information from the three-dimensional sensor, corrects the AR object along the shape of the object, and displays it on the display unit.
  9.  The head-mounted display according to claim 1, wherein the AR object is a model image, the object is a handwriting surface, and a virtual image of the model image displayed on the display unit is displayed on the handwriting surface.
  10.  The head-mounted display according to claim 9, wherein the creation of a handwritten image is supported by changing the virtual image of the model image to a different display before and after handwriting on the handwriting surface.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/000198 WO2021140575A1 (en) 2020-01-07 2020-01-07 Head-mounted display for displaying ar object


Publications (1)

Publication Number Publication Date
WO2021140575A1 true WO2021140575A1 (en) 2021-07-15

Family

ID=76788494


Country Status (1)

Country Link
WO (1) WO2021140575A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014518596A * 2011-03-29 2014-07-31 Qualcomm, Incorporated Modular mobile connected pico projector for local multi-user collaboration
JP2017167275A * 2016-03-15 2017-09-21 Kyocera Document Solutions Inc. Character learning system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAMAUCHI, Makoto: "Evaluation of Drawing Operations Support Using Augmented Reality," Proceedings of the 17th Virtual Reality Society of Japan Conference, pp. 268-269 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11531805B1 (en) 2021-12-09 2022-12-20 Kyndryl, Inc. Message composition and customization in a user handwriting style

Similar Documents

Publication Publication Date Title
JP7200195B2 (en) sensory eyewear
US20210407203A1 (en) Augmented reality experiences using speech and text captions
US11366516B2 (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
JP6323040B2 (en) Image processing apparatus, image processing method, and program
US10318011B2 (en) Gesture-controlled augmented reality experience using a mobile communications device
US20190339840A1 (en) Augmented reality device for rendering a list of apps or skills of artificial intelligence system and method of operating the same
US20140068526A1 (en) Method and apparatus for user interaction
US20200202397A1 (en) Wearable Terminal, Information Processing Terminal, Non-Transitory Computer Readable Storage Medium, and Product Information Display Method
KR20190053001A (en) Electronic device capable of moving and method for operating thereof
CN111742281A (en) Electronic device for providing second content according to movement of external object for first content displayed on display and operating method thereof
CN111626183A (en) Target object display method and device, electronic equipment and storage medium
KR20200040716A (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
WO2021140575A1 (en) Head-mounted display for displaying ar object
KR20210017081A (en) Apparatus and method for displaying graphic elements according to object
KR20190134975A Augmented reality device for rendering a list of apps or skills of artificial intelligence system and method of operating the same
US20220244788A1 (en) Head-mounted display
US20220375362A1 (en) Virtual tutorials for musical instruments with finger tracking in augmented reality
KR20200111144A (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
JP2021009552A (en) Information processing apparatus, information processing method, and program
US20240079031A1 (en) Authoring tools for creating interactive ar experiences
US20230410441A1 (en) Generating user interfaces displaying augmented reality graphics
US20240077983A1 (en) Interaction recording tools for creating interactive ar stories
KR102659357B1 (en) Electronic device for providing avatar animation and method thereof
US20240077984A1 (en) Recording following behaviors between virtual objects and user avatars in ar experiences
US20240119928A1 (en) Media control tools for managing communications between devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20912786; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: JP)
122 Ep: pct application non-entry in european phase (Ref document number: 20912786; Country of ref document: EP; Kind code of ref document: A1)