WO2018235128A1 - Display editing apparatus, display editing method, and display editing program - Google Patents

Display editing apparatus, display editing method, and display editing program Download PDF

Info

Publication number
WO2018235128A1
WO2018235128A1 · PCT/JP2017/022519
Authority
WO
WIPO (PCT)
Prior art keywords
information
unit
positional relationship
graphic
graphic information
Prior art date
Application number
PCT/JP2017/022519
Other languages
French (fr)
Japanese (ja)
Inventor
Takayuki Tsukitani (築谷 喬之)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to DE112017007535.1T priority Critical patent/DE112017007535B4/en
Priority to JP2019524719A priority patent/JP6671549B2/en
Priority to PCT/JP2017/022519 priority patent/WO2018235128A1/en
Publication of WO2018235128A1 publication Critical patent/WO2018235128A1/en

Classifications

    • G06F 3/1454 — Digital output to display device; copying the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F 3/0304 — Detection arrangements using opto-electronic means
    • G06F 3/0346 — Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF pointers using gyroscopes, accelerometers or tilt sensors
    • G09G 3/001 — Control arrangements using specific devices not provided for in groups G09G 3/02–G09G 3/36, e.g. projection systems; display of non-alphanumerical information, solely or in combination with alphanumerical information
    • G09G 3/20 — Presentation of an assembly of a number of characters, e.g. a page, by combining individual elements arranged in a matrix
    • G09G 5/00 — Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 — Characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/363 — Graphics controllers
    • G09G 2380/10 — Automotive applications
    • H04N 5/74 — Projection arrangements for image reproduction, e.g. using eidophor

Definitions

  • the present invention relates to a display editing apparatus that calculates graphic information for a drawing apparatus that performs drawing on an object.
  • A known display apparatus includes an acquisition unit for acquiring an image to be displayed, a display unit for displaying the acquired image on a display surface, and a control unit that, when an instruction to display an OSD (On Screen Display) image for the displayed image is input, superimposes on the displayed image an OSD image whose shape depends on the position of the operator who input the instruction, and displays it on the display surface.
  • With such an apparatus, an operator at a position not directly facing the display surface can visually recognize, for example, a rectangular OSD image as a desired figure.
  • However, a person editing the graphic information must edit while imagining how the figure will look when viewed from the non-facing position, and such editing work is difficult.
  • The present invention has been made to solve the above problem, and its object is to provide a display editing apparatus, a display editing method, and a display editing program capable of editing graphic information that instructs a drawing apparatus to draw so that a viewer looking at a figure drawn on an object from a non-facing position visually recognizes a desired figure.
  • A display editing apparatus according to the present invention includes: a captured image acquisition unit that acquires an image obtained by capturing an object with a camera; a graphic reception unit that receives first graphic information on a first graphic drawn on the image; a positional relationship acquisition unit that acquires the positional relationship between the object and the camera at the time the image was captured; a coordinate conversion unit that calculates second graphic information from the positional relationship acquired by the positional relationship acquisition unit and the first graphic information; and an instruction unit that outputs the second graphic information to the drawing apparatus.
  • According to the present invention, graphic information that instructs the drawing apparatus to draw can be edited so that a viewer looking at the figure drawn on the object from a non-facing position visually recognizes the desired figure.
  • FIG. 1 is a diagram showing a configuration example of a display editing system according to a first embodiment.
  • FIG. 2 is a block diagram showing a configuration example of the display editing apparatus of the first embodiment.
  • FIG. 3 is a block diagram showing a configuration example of a projector of the first embodiment.
  • FIG. 4 is a flowchart explaining the operation of the display editing apparatus of the first embodiment.
  • FIG. 5 is a diagram explaining an example of a use scene of the display editing apparatus in the first embodiment.
  • FIGS. 6A and 6B are diagrams showing an example of the hardware configuration of the display editing apparatus according to the first embodiment.
  • FIG. 7 is a block diagram showing a configuration example of a display editing apparatus of a second embodiment.
  • FIG. 8 is a flowchart explaining the operation of the display editing apparatus of the second embodiment.
  • FIG. 9 is a block diagram showing a configuration example of a display editing apparatus of a third embodiment.
  • The display editing apparatus edits graphic information that instructs the drawing apparatus to draw so that a viewer looking at the figure drawn on the object by the drawing apparatus from a non-facing position visually recognizes a desired figure.
  • The term "graphic" in this specification includes not only simple geometric patterns but also characters, symbols, and the like.
  • the graphic may be a still image or a moving image.
  • the display editing apparatus is mounted on a tablet PC (Personal Computer) including a camera and a touch panel.
  • the drawing device is assumed to be a projector attached to a car.
  • the projector is a projector that draws a figure on an object by projecting light.
  • The figure drawn on the object is assumed to be a still image.
  • the object is a road surface which is a projection surface on which light is projected by the projector.
  • the assumed viewer is a person on the road around the road surface.
  • The user of the display editing apparatus according to the first embodiment draws a still image of the desired shape (hereinafter referred to as the "visible image") that a viewer on the road around the road surface should see when looking at the still image projected onto the road surface by the projector.
  • The display editing apparatus then edits the graphic information that instructs the projector what to project, based on the still image drawn by the user.
  • A captured image, obtained by photographing the road surface in advance with the camera from the viewer's viewpoint, is displayed on the tablet PC, and the user can draw the desired still image, with the shape as it should appear from the viewer's viewpoint, at the desired position on the captured image displayed on the tablet PC.
  • FIG. 1 is a diagram showing a configuration example of a display editing system according to the first embodiment.
  • the tablet PC 100 causes the display unit 102 (described later) to display an image captured by a camera 101 (described later) provided on the back surface.
  • the display unit 102 is a touch panel display. The user draws a desired still image by touching the display unit 102 with a finger or a so-called stylus.
  • The display editing apparatus 1 edits graphic information based on the still image the user draws on the display unit 102 and outputs the graphic information to the projector 2, and the projector 2 projects the still image indicated by the graphic information onto the road surface.
  • the tablet PC 100 and the projector 2 may be able to communicate with each other by any means, whether wired or wireless.
  • FIG. 2 is a block diagram showing an example of the configuration of the display editing apparatus 1 according to the first embodiment.
  • the display editing apparatus 1 includes a captured image acquisition unit 111, a figure reception unit 112, a positional relationship acquisition unit 113, a figure position determination unit 114, a coordinate conversion unit 115, and an instruction unit 116.
  • the captured image acquisition unit 111 acquires a captured image in which the camera 101 captures a road surface.
  • The captured image acquisition unit 111 outputs the acquired captured image to the positional relationship acquisition unit 113 and the display control unit 117.
  • the captured image acquired by the captured image acquisition unit 111 and output to the positional relationship acquisition unit 113 and the display control unit 117 is simply referred to as an image.
  • the graphic receiving unit 112 receives graphic information (hereinafter referred to as “first graphic information”) related to a graphic drawn on the image displayed on the display unit 102 (hereinafter referred to as “first graphic”).
  • the graphic receiving unit 112 receives first graphic information on a first graphic drawn by the user.
  • the figure reception unit 112 outputs the received first figure information to the figure position determination unit 114.
  • the positional relationship acquisition unit 113 acquires the positional relationship between the road surface and the camera 101 when the captured image acquired by the captured image acquisition unit 111 is captured. Further, the positional relationship acquisition unit 113 acquires the positional relationship between the projector 2 and the camera 101 when the captured image acquired by the captured image acquisition unit 111 is captured. The positional relationship acquisition unit 113 outputs the acquired positional relationship information to the figure position determination unit 114.
  • Based on the information on the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101 output from the positional relationship acquisition unit 113, and on the first graphic information, the figure position determination unit 114 determines which position on the road surface the first figure corresponds to. If the first figure falls outside the road surface, the figure position determination unit 114 outputs error information, indicating that the first figure cannot be projected onto the road surface, to the display control unit 117 or the audio output control unit 118.
  • If the first figure is on the road surface, the figure position determination unit 114 outputs the first graphic information, together with the information on the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101, to the coordinate conversion unit 115.
  • When the first graphic information and the positional relationship information are output from the figure position determination unit 114, the coordinate conversion unit 115 converts the coordinates of the first graphic information using the information on the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101, and calculates graphic information (hereinafter referred to as "second graphic information") on the graphic to be projected onto the road surface by the projector 2 (hereinafter referred to as the "second graphic").
  • The second graphic is the figure the projector 2 must project in order to draw on the object a figure that the viewer sees with the shape of the first figure unchanged, as viewed from the camera position at which the captured image acquired by the captured image acquisition unit 111 was taken.
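The patent does not give the conversion formulas, but when the object is a planar road surface the conversion performed by the coordinate conversion unit 115 can be modeled with planar homographies: one 3×3 matrix maps camera-image pixels to road-plane coordinates, another maps road-plane coordinates to projector pixels, and their product converts first-graphic points directly into second-graphic points. The following sketch is a hypothetical illustration under that assumption; the example matrices are made up, not taken from the patent, and in practice would come from the positional relationship acquisition unit 113.

```python
# Hypothetical sketch of the coordinate conversion unit 115 for a planar road.
# camera pixel -> road plane and road plane -> projector pixel are each 3x3
# planar homographies; their product maps a first-graphic point (camera image)
# directly to a second-graphic point (projector frame).

def mat_mul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_homography(h, x, y):
    """Map a point (x, y) through homography h, with perspective divide."""
    u = h[0][0] * x + h[0][1] * y + h[0][2]
    v = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return u / w, v / w

# Illustrative (assumed) matrices: camera image -> road plane is a pure scale;
# road plane -> projector frame includes a perspective term modelling the
# oblique projection angle of a projector mounted on the vehicle.
H_CAM_TO_ROAD = [[0.01, 0.0, 0.0], [0.0, 0.01, 0.0], [0.0, 0.0, 1.0]]
H_ROAD_TO_PROJ = [[100.0, 0.0, 0.0], [0.0, 100.0, 0.0], [0.0, 0.002, 1.0]]

# The two mappings compose into a single matrix applied per point.
H_CAM_TO_PROJ = mat_mul(H_ROAD_TO_PROJ, H_CAM_TO_ROAD)

def first_to_second(points):
    """Convert first-graphic pixels (camera image) to second-graphic pixels."""
    return [apply_homography(H_CAM_TO_PROJ, x, y) for x, y in points]
```

Because the two mappings compose into one matrix, the cost per drawn point is constant, which matters when the graphic reception unit 112 streams stroke points continuously.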
  • the coordinate conversion unit 115 outputs the calculated second graphic information to the instruction unit 116.
  • The instruction unit 116 outputs the second graphic information calculated by the coordinate conversion unit 115 to the projector 2, and the projector 2 projects the second graphic indicated by that information onto the road surface.
  • the display control unit 117 causes the display unit 102 to display the image acquired by the captured image acquisition unit 111.
  • When error information is output from the figure position determination unit 114, the display control unit 117 causes the display unit 102 to display information corresponding to the error information, such as an error message.
  • When error information indicating that the first figure cannot be projected onto the road surface is output from the figure position determination unit 114, the audio output control unit 118 causes the audio output device 103 to output a sound or voice corresponding to the error information, such as a buzzer sound or a spoken error message.
  • the audio output device 103 is a speaker provided in the tablet PC 100.
  • the storage unit 119 stores information related to the projector, such as the position or posture of the projector 2.
  • the storage unit 119 also stores information on the shape of the road surface, such as unevenness.
  • The information stored in the storage unit 119 may be set in advance by the user or the like, or, for items such as the position of the projector 2, may be set based on information the display editing apparatus 1 itself detects when it is used, as described later.
  • In the first embodiment the storage unit 119 is included in the display editing apparatus 1, but it is not limited to this; the storage unit 119 may be provided outside the display editing apparatus 1 in a place the display editing apparatus 1 can refer to.
  • FIG. 3 is a block diagram showing a configuration example of the projector 2 according to the first embodiment.
  • the projector 2 includes a drawing instruction receiving unit 21 and a drawing unit 22.
  • the drawing instruction receiving unit 21 receives the second graphic information output from the display editing apparatus 1.
  • the drawing instruction receiving unit 21 outputs the received second graphic information to the drawing unit 22.
  • the drawing unit 22 projects the second graphic indicated by the second graphic information received by the drawing instruction receiving unit 21 on the road surface.
  • FIG. 4 is a flowchart for explaining the operation of the display editing device 1 according to the first embodiment.
  • FIG. 5 is a diagram explaining an example of a use scene of the display editing apparatus 1 in the first embodiment. As shown in FIG. 5, the user holds up the tablet PC 100 and observes the projector 2, the automobile 51 to which the projector 2 is attached, and their surroundings through the tablet PC 100.
  • Using the camera of the tablet PC 100, the user captures, from the viewpoint position at which a viewer would see the still image projected by the projector 2, an image that includes at least the road surface on which the still image is to be drawn and the projector 2.
  • The projector 2 is in a state in which it can project an arbitrary still image onto the road surface. With the above as the assumed state, the display editing apparatus 1 operates as described in the flowchart of FIG. 4.
  • The figure reception unit 112 waits until first figure information is received ("NO" in step ST401), and when first figure information is received, outputs it to the figure position determination unit 114.
  • The graphic receiving unit 112 accepts drawn first graphic information at any time; thereafter, whenever the user draws even a small part of the first figure, the graphic receiving unit 112 receives the corresponding first graphic information.
  • The positional relationship acquisition unit 113 acquires the positional relationship between the road surface and the camera 101, and also the positional relationship between the projector 2 and the camera 101 (step ST402). Specifically, for example, a marker is installed in advance at a predetermined position in the space within the imaging range of the camera 101, and the positional relationship acquisition unit 113 detects the marker in the image to acquire the three-dimensional positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101. Note that this is merely an example, and the positional relationship acquisition unit 113 may acquire these positional relationships using another existing method.
  • The positional relationship acquisition unit 113 may instead obtain the positional relationships from feature quantities in the space, or by using an existing application or the like that recognizes the surrounding three-dimensional space.
  • Alternatively, a three-dimensional tracker may be attached to the tablet PC 100, and the positional relationship acquisition unit 113 may acquire the positional relationships from the tracker's measurement information.
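As one concrete, hypothetical realization of the marker-based approach above: if the four corners of a marker are detected in the captured image and their positions on the road plane are known in advance, the camera-to-road-plane mapping can be fitted as a 3×3 homography by the direct linear transform (DLT). The patent does not prescribe this method; all function names below are illustrative assumptions.

```python
# Sketch (assumed, not from the patent) of fitting the camera-to-road-plane
# homography from four marker-corner correspondences, as the positional
# relationship acquisition unit 113 might do after detecting a marker.

def solve_linear(a, b):
    """Solve a @ x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def homography_from_points(src, dst):
    """Fit h (with h[2][2] = 1) so that each src point maps to its dst point.

    Each correspondence (x, y) -> (u, v) contributes two linear equations of
    the DLT system in the eight unknown entries of h.
    """
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1.0, 0.0, 0.0, 0.0, -u * x, -u * y]); b.append(u)
        a.append([0.0, 0.0, 0.0, x, y, 1.0, -v * x, -v * y]); b.append(v)
    h = solve_linear(a, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(h, x, y):
    """Map (x, y) through homography h with perspective divide."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)
```

Four non-collinear correspondences determine the homography exactly; with more detected markers the same system could be solved in a least-squares sense for robustness.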
  • It is assumed that the display editing apparatus 1 stores in advance information on the mounting position and attitude of the projector 2 on the vehicle and, when the road surface includes unevenness or the like, information on the shape of the road surface.
  • the positional relationship acquisition unit 113 outputs the acquired positional relationship information to the figure position determination unit 114.
  • Based on the first graphic information output from the graphic reception unit 112 and the information on the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101 output from the positional relationship acquisition unit 113, the figure position determination unit 114 determines which position on the road surface the first figure corresponds to (step ST403).
  • The figure position determination unit 114 determines whether the first figure is on the road surface (step ST404); if it is outside the road surface ("NO" in step ST404), error information indicating that the first figure cannot be projected onto the road surface is output to the display control unit 117 or the audio output control unit 118 (step ST407).
  • the display control unit 117 causes the display unit 102 to display information corresponding to the error information, such as an error message.
  • The audio output control unit 118 causes the audio output device 103 to output a sound or voice corresponding to the error information, such as a buzzer sound or an error message.
  • the figure position determination unit 114 may output the error information to both the display control unit 117 and the audio output control unit 118.
  • If the first figure is on the road surface ("YES" in step ST404), the figure position determination unit 114 outputs the first graphic information, together with the information on the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101, to the coordinate conversion unit 115.
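The on-road-surface check of step ST404 is not spelled out in the patent. One plausible sketch, assuming the road region is represented as a polygon in road-plane coordinates, is a point-in-polygon test applied to every point of the mapped first figure; the function names are assumptions for illustration.

```python
# Hypothetical sketch of step ST404: test whether every point of the first
# figure, mapped onto the road plane, lies inside the road region polygon.

def point_in_polygon(pt, poly):
    """Ray-casting test: True if pt is inside the polygon (list of vertices)."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edges crossed by a ray going right from pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def figure_on_road(figure_pts, road_poly):
    """True only if the entire mapped figure falls on the road surface."""
    return all(point_in_polygon(p, road_poly) for p in figure_pts)
```

If `figure_on_road` returns False, the figure position determination unit would emit the error information of step ST407 instead of forwarding the figure to the coordinate conversion unit.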
  • The coordinate conversion unit 115 uses the information on the positional relationship between the road surface and the camera 101, the information on the positional relationship between the projector 2 and the camera 101, and the first graphic information output by the figure position determination unit 114 in step ST404 to calculate second graphic information on the second graphic to be projected onto the road surface by the projector 2 (step ST405).
  • the coordinate conversion unit 115 outputs the calculated second graphic information to the instruction unit 116.
  • the instructing unit 116 outputs the second graphic information calculated by the coordinate conversion unit 115 in step ST405 to the projector 2 (step ST406).
  • In the projector 2, the drawing instruction receiving unit 21 receives the second graphic information output from the instructing unit 116, and the drawing unit 22 projects the second graphic indicated by the second graphic information onto the road surface.
  • The figure reception unit 112 then determines whether further first figure information is received (step ST408); if first figure information is received ("YES" in step ST408), the process returns to step ST402 and the subsequent processing is repeated. If first figure information is not received ("NO" in step ST408), the process ends.
  • The case where first graphic information is not received means, for example, that no new first graphic information has been input for a predetermined time, or that the user has performed an operation to end the input of first graphic information.
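Steps ST401 through ST408 can be condensed into a small event loop. All six callables below are hypothetical stand-ins for the units described above, not APIs from the patent; `receive_first_graphic` returning `None` models the end conditions just mentioned.

```python
# Sketch of the FIG. 4 flowchart as an event loop (assumed structure).

def edit_loop(receive_first_graphic, acquire_positions, locate_on_road,
              to_second_graphic, send_to_projector, report_error):
    """Run one editing session; returns the number of graphics projected."""
    projected = 0
    first = receive_first_graphic()            # ST401: wait for a drawn figure
    while first is not None:                   # ST408: None ends the session
        relation = acquire_positions()         # ST402: road<->camera, projector<->camera
        placed = locate_on_road(first, relation)   # ST403: position on road surface
        if placed is None:                     # ST404 "NO": figure off the road
            report_error("figure cannot be projected onto the road surface")  # ST407
        else:
            second = to_second_graphic(placed, relation)  # ST405: coordinate conversion
            send_to_projector(second)          # ST406: instruct the projector
            projected += 1
        first = receive_first_graphic()        # ST408: check for further input
    return projected
```

Because the loop re-acquires the positional relationship on every iteration, the projection stays consistent even if the tablet PC (and hence the camera) moves between strokes.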
  • Conventionally, a user follows a procedure of editing a figure or the like to be displayed on a personal computer and then projecting it with a projector.
  • In that case, the user edits the figure with image editing software or the like while viewing it from the directly facing position; when the created figure is projected, the user can only imagine how it will look, and must, for example, edit while keeping in mind its appearance as viewed from the non-facing position, which makes the task of creating the figure difficult.
  • Moreover, since the scale naturally differs between the personal computer's screen and real space, there is also a risk that the figure, when actually displayed, looks different from what the user expected.
  • In contrast, with the display editing apparatus 1 of the first embodiment, the user can draw the figure that the viewer should see when looking at the figure drawn on the object from the non-facing position directly on the tablet PC 100, as the first figure, exactly as it should appear. As a result, editing becomes easier for the user, and the quality of the edited figure (the second figure) also improves. Furthermore, since input on the tablet PC 100 is performed on an image of the very object onto which the graphic will actually be projected, the user can work while confirming the actual sense of scale; the projected figure does not deviate from the user's intended image, and a high-quality figure can be projected efficiently.
  • FIGS. 6A and 6B are diagrams showing an example of the hardware configuration of the display editing apparatus 1 according to the first embodiment of the present invention.
  • Each function of the captured image acquisition unit 111, the figure reception unit 112, the positional relationship acquisition unit 113, the figure position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118 is realized by a processing circuit 601. That is, the display editing apparatus 1 includes the processing circuit 601 for converting the first graphic into the second graphic, based on the acquired captured image and the first graphic information received from the user, and for performing control to project it.
  • the processing circuit 601 may be dedicated hardware as shown in FIG. 6A or may be a CPU (Central Processing Unit) 605 that executes a program stored in the memory 606 as shown in FIG. 6B.
  • The processing circuit 601 may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination thereof.
  • When the processing circuit 601 is the CPU 605, each function of the captured image acquisition unit 111, the figure reception unit 112, the positional relationship acquisition unit 113, the figure position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118 is realized by software, firmware, or a combination of software and firmware.
  • The functions of the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118 are realized by the CPU 605 executing programs stored in the HDD (hard disk drive) 602, the memory 606, or the like, or by a processing circuit such as a system LSI (Large-Scale Integration).
  • It can also be said that the programs stored in the HDD 602, the memory 606, and the like cause a computer to execute the procedures and methods of the captured image acquisition unit 111, the figure reception unit 112, the positional relationship acquisition unit 113, the figure position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118.
  • The memory 606 is, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory), or a magnetic disk, a flexible disk, an optical disk, a compact disc, a mini disc, a DVD (Digital Versatile Disc), or the like.
  • Note that, for the functions of the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118, a part may be realized by dedicated hardware and a part by software or firmware.
  • For example, the function of the captured image acquisition unit 111 can be realized by the processing circuit 601 as dedicated hardware, while the functions of the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118 can be realized by the processing circuit reading and executing programs stored in the memory 606.
  • The storage unit 119 is implemented by, for example, the HDD 602. Note that this is merely an example; the storage unit 119 may be configured by a DVD, the memory 606, or the like.
  • the display editing apparatus 1 further includes an input interface device 603 and an output interface device 604 that communicate with an external device such as the camera 101, the display unit 102, or the projector 2.
  • the captured image acquisition unit 111 acquires a captured image captured by the camera 101 using the input interface device 603.
  • the graphic receiving unit 112 acquires first graphic information by an input operation of the user using the input interface device 603.
  • the instruction unit 116 transmits the second graphic information to the projector 2 using the output interface device 604.
  • As described above, the display editing apparatus 1 according to the first embodiment includes: the captured image acquisition unit 111 that acquires an image obtained by capturing an object with the camera 101; the graphic reception unit 112 that receives first graphic information on a first figure drawn on the image; the positional relationship acquisition unit 113 that acquires the positional relationship between the object and the camera at the time the image was captured; the coordinate conversion unit 115 that calculates second graphic information from the positional relationship acquired by the positional relationship acquisition unit 113 and the first graphic information; and the instruction unit 116 that outputs the second graphic information calculated by the coordinate conversion unit 115 to a drawing device that draws on the object.
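  • The text does not spell out the mathematics of the coordinate conversion. As an illustration only, a minimal sketch is given below, assuming the drawn-on surface is planar so that the camera-image-to-surface mapping and the surface-to-projector mapping can each be modeled as a 3x3 homography; the identity matrices are placeholders for the values that would be derived from the acquired positional relationships.

```python
# Hedged sketch (not from the patent): map vertices of the first figure,
# drawn in camera-image pixels, onto the road-surface plane and then into
# projector pixels, using two assumed 3x3 homographies.
import numpy as np

def apply_homography(H, points):
    """Map Nx2 points through a 3x3 homography H."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # back to 2-D

# First graphic information: vertices drawn on the captured image (pixels).
first_graphic = np.array([[100.0, 200.0], [160.0, 200.0], [130.0, 260.0]])

H_img2road = np.eye(3)    # placeholder: camera image -> road-surface plane
H_road2proj = np.eye(3)   # placeholder: road-surface plane -> projector

# Second graphic information: the same figure in projector pixel coordinates.
on_road = apply_homography(H_img2road, first_graphic)
second_graphic = apply_homography(H_road2proj, on_road)
print(second_graphic.shape)  # one output vertex per input vertex
```

  • With real positional-relationship data, `H_img2road` and `H_road2proj` would be estimated from the camera pose relative to the road surface and the projector pose relative to the camera, respectively.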
  • The position of the camera when photographing the object can be any position. Therefore, in the first embodiment, graphic information for instructing the drawing device to draw can be edited so that the viewer recognizes a desired figure when looking at the figure drawn on the object from a non-facing position, including the case where the viewer is expected to recognize the desired figure when viewing it from the facing position.
  • In the first embodiment, the user often draws the first figure while viewing, from a non-facing position, the surface of the object onto which the figure is projected. In that case, the second figure projected from the projector 2 may not be symmetrical when viewed from the projector 2 side or the car side.
  • For example, when a pedestrian or the like around the car is assumed as the viewer of the figure projected from the projector 2 attached to the car, and the application is intended to let the pedestrian visually recognize the traveling direction of the car, it is desirable to project from the projector 2 a figure that is symmetrical when viewed from the projector 2 side or the car side, so that the pedestrian can accurately recognize the traveling direction of the car.
  • However, when the user draws the first figure while looking at the surface of the object from a non-facing position, it is difficult to draw a figure that appears symmetrical from the position of the assumed viewer.
  • In the second embodiment, as in the first embodiment, it is assumed that the display editing apparatus 1a (described later) is mounted on the tablet PC 100, that the user uses the tablet PC 100 to draw the desired still image that should be visible to a viewer around the road when the viewer looks at the still image projected on the road surface by the projector 2 attached to a car, and that the display editing apparatus 1a edits graphic information for instructing the projector 2 to project, based on the still image drawn by the user.
  • the configuration of the display editing system according to the second embodiment is the same as the configuration described with reference to FIG. 1 in the first embodiment, and therefore redundant description will be omitted. Further, the configuration of the projector 2 according to the second embodiment is the same as the configuration described with reference to FIG. 3 in the first embodiment, and thus the redundant description will be omitted.
  • FIG. 7 is a block diagram showing a configuration example of the display editing device 1a according to the second embodiment.
  • the same components as those described with reference to FIG. 2 in the first embodiment are denoted by the same reference numerals, and the description thereof will not be repeated.
  • the display editing apparatus 1a differs from the display editing apparatus 1 according to the first embodiment in that the display editing apparatus 1a further includes a figure complementing unit 120.
  • The graphic complementing unit 120 causes the display control unit 117 to display a guide superimposed on the image displayed on the display unit 102 at an arbitrary timing before the user starts drawing the first figure. Further, when the graphic complementing unit 120 acquires the first graphic information received by the graphic receiving unit 112, it complements the first figure, based on the first graphic information, so that the figure becomes symmetrical when viewed from the projector 2 side or the car side, and creates graphic information of the complemented first figure (hereinafter referred to as "first figure information after complementation").
  • In the second embodiment, the figure complementing unit 120 displays, as the guide, an image of a grid that is symmetrical when viewed from the projector 2 side or the car side. The figure complementing unit 120 superimposes the guide on the position of the road surface in the image.
  • The figure complementing unit 120 outputs the first figure information after complementation to the figure position determination unit 114. The figure complementing unit 120 also outputs the first figure information after complementation to the display control unit 117, which superimposes the complemented first figure on the image displayed by the display unit 102.
  • The figure position determination unit 114 determines which position on the road surface the complemented first figure corresponds to, based on the information on the positional relationship between the road surface and the camera 101 and on the positional relationship between the projector 2 and the camera 101, both output from the positional relationship acquisition unit 113, and on the first figure information after complementation output from the figure complementing unit 120.
  • the hardware configuration of the display editing apparatus 1a is the same as the configuration described with reference to FIGS. 6A and 6B in the first embodiment, and thus the redundant description will be omitted.
  • The figure complementing unit 120, like the captured image acquisition unit 111, the figure reception unit 112, the positional relationship acquisition unit 113, the figure position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118, is realized by the processing circuit 601.
  • FIG. 8 is a flowchart for explaining the operation of the display editing device 1a according to the second embodiment.
  • The specific operations in steps ST801 and ST803 to ST809 of FIG. 8 are the same as the specific operations in steps ST401 to ST408 of FIG. 4 described in the first embodiment, and therefore the description thereof will not be repeated.
  • That is, in the second embodiment, step ST802 of FIG. 8 is added to the operation described with reference to FIG. 4.
  • In step ST801, when the graphic receiving unit 112 receives the first graphic information, it outputs the received first graphic information to the graphic complementing unit 120.
  • In step ST802, the figure complementing unit 120 complements the first figure so that it becomes symmetrical, based on the first figure information output from the figure accepting unit 112, and creates the first figure information after complementation.
  • the figure complementing unit 120 outputs the first figure information after complementation to the figure position determination unit 114. Further, the figure complementing unit 120 also outputs the complemented first figure information to the display control unit 117.
  • the display control unit 117 causes the first figure after complementation indicated by the first figure information after complementation to be superimposed on the image displayed by the display unit 102.
  • FIG. 9 is a diagram showing an example of a screen of the display unit 102 on which the guide and the first figure after complementation are displayed in the second embodiment.
  • the user inputs an oblique line (9a in FIG. 9) as the first figure.
  • the figure complementing unit 120 complements the oblique line (9b in FIG. 9) so that the first figure input by the user is symmetrical with respect to the projector 2 side or the car side.
  • Thereafter, the positional relationship acquisition unit 113 acquires the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101 (step ST803), and the graphic position determination unit 114 determines to which position on the road surface the complemented first figure corresponds, based on the information on the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101 output from the positional relationship acquisition unit 113, and on the first figure information after complementation output from the figure complementing unit 120 (step ST804).
  • the figure complementing unit 120 displays the grid as a guide, but this is merely an example.
  • the figure complementing unit 120 may display, as a guide, arbitrary information for assisting an input to a user who intends to edit a figure that is symmetrical when viewed from the projector 2 side or the car side.
  • Alternatively, the figure complementing unit 120 may not display the guide at all, and may simply complement the first figure so that it becomes left-right symmetrical, displaying only the complemented symmetrical first figure.
  • In the above description, the figure complementing unit 120 complements the first figure so that the first figure becomes left-right symmetrical, but the method of complementation is not limited to left-right symmetry.
  • For example, the figure complementing unit 120 may complement the first figure so that it becomes vertically symmetrical as viewed from the projector 2 side or the automobile side, or so that it becomes point-symmetrical.
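  • The patent does not give an algorithm for the complementation; as an illustration only, the left-right and point-symmetric variants can be sketched as reflections of the drawn vertex list, assuming the figure is represented as (x, y) vertices in a frame whose vertical axis is taken to run along the projector/car direction.

```python
# Hedged sketch (not from the patent): model the figure complementing unit's
# behavior as geometric reflections of the user's stroke.

def complement_left_right(stroke, axis_x=0.0):
    """Return the stroke plus its mirror image across the line x = axis_x."""
    mirrored = [(2 * axis_x - x, y) for (x, y) in stroke]
    return stroke + list(reversed(mirrored))

def complement_point_symmetric(stroke, center=(0.0, 0.0)):
    """Return the stroke plus its 180-degree rotation about `center`."""
    cx, cy = center
    rotated = [(2 * cx - x, 2 * cy - y) for (x, y) in stroke]
    return stroke + rotated

# An oblique line like 9a in FIG. 9, drawn to the left of the axis:
oblique = [(-3.0, 0.0), (-1.0, 2.0)]
# The complemented figure additionally contains the mirrored line (like 9b).
print(complement_left_right(oblique))
```

  • A vertical-symmetry variant would be the same reflection applied across a horizontal axis instead.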
  • As described above, according to the display editing apparatus 1a of the second embodiment, the figure complementing unit 120 complements the first figure so that it becomes symmetrical, based on the first figure information received by the figure accepting unit 112, and generates the first figure information after complementation; the coordinate conversion unit 115 then calculates the second graphic information from the positional relationship acquired by the positional relationship acquisition unit 113 and the first figure information after complementation generated by the figure complementing unit 120.
  • As a result, a figure that is symmetrical as viewed from the drawing device side can be displayed, and a figure of high quality can be created.
  • In the first and second embodiments, it was assumed that the car to which the projector 2 is attached is at a standstill, and that the situation related to the projector 2 does not change, for example by the position of the projector 2 moving as the car travels or by the traveling state of the car changing.
  • In the third embodiment, as in the first and second embodiments, it is assumed that the display editing apparatus 1b (described later) is mounted on the tablet PC 100 and that the user edits graphic information for a figure to be projected on the road surface by the projector 2 attached to a car and viewed by a viewer around the road; unlike the first and second embodiments, however, changes in the situation related to the projector 2 are taken into account.
  • FIG. 10 is a block diagram showing a configuration example of the display editing device 1b according to the third embodiment.
  • the same components as those described with reference to FIG. 2 in the first embodiment are denoted by the same reference numerals, and redundant description will be omitted.
  • the display editing apparatus 1b differs from the display editing apparatus 1 according to the first embodiment in that the display editing apparatus 1b further includes a status information acquisition unit 121 and a graphic information acquisition unit 122.
  • the configuration of the display editing system according to the third embodiment is the same as the configuration described with reference to FIG. 1 in the first embodiment, and therefore redundant description will be omitted. Further, the configuration of the projector 2 according to the third embodiment is the same as the configuration described with reference to FIG. 3 in the first embodiment, and thus the redundant description will be omitted.
  • the status information acquisition unit 121 acquires status information of the projector 2.
  • the situation information acquisition unit 121 acquires vehicle information on the vehicle as the situation information of the projector 2 from the car to which the projector 2 is attached.
  • the vehicle information is various information related to the vehicle including the information related to the projector 2 attached to the vehicle, and is any information that can be a trigger for changing the still image projected from the projector 2.
  • Specifically, the vehicle information may be, for example, position information of the car, position information of the projector 2 attached to the car, or information on the driving condition of the car, such as a state in which the car is parked, the direction in which the car is traveling, or the traveling speed of the car.
  • the situation information acquisition unit 121 outputs the acquired vehicle information to the graphic information acquisition unit 122.
  • The graphic information acquisition unit 122 stores, in the storage unit 119, the situation of the projector 2 indicated by the vehicle information output from the situation information acquisition unit 121 in association with the second graphic information calculated in that situation. Therefore, in addition to the information described in the first and second embodiments, the storage unit 119 stores the situation of the projector 2 and the second graphic information corresponding to that situation in association with each other. For example, suppose that the viewer is assumed to stay at one position, and that the user draws, on the screen of the tablet PC 100, a first figure to be viewed by the viewer for each of a plurality of different points at which the car may be located relative to that position, thereby drawing a plurality of first figures.
  • In that case, the display editing apparatus 1b calculates second graphic information for each of the plurality of pieces of first graphic information indicating the plurality of first figures.
  • The graphic information acquisition unit 122 then associates the position information of the car concerned with each of the plurality of pieces of second graphic information, and causes the storage unit 119 to store them. Details will be described later.
  • the hardware configuration of the display editing apparatus 1b is the same as the configuration described with reference to FIGS. 6A and 6B in the first embodiment, and thus the redundant description will be omitted.
  • The status information acquisition unit 121 and the graphic information acquisition unit 122, like the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118, are realized by the processing circuit 601.
  • Next, the operation of the display editing device 1b of the third embodiment will be explained for a case where the projector 2 changes the figure to be projected according to the position of the car, and the user operates the display editing apparatus 1b to edit the figure for each position.
  • Specifically, the user operates the touch panel of the tablet PC 100 in advance in a state where the car to which the projector 2 is attached is at each of a plurality of different points, and draws a first figure on the image displayed on the display unit 102.
  • Here, it is assumed that the plurality of points are, for example, three points, and that the user draws a first figure for the state in which the car is at each of the three points.
  • At each of the three points, the display editing apparatus 1b accepts the first figure drawn by the user, acquires and determines the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101, and calculates, from the positional relationship information and the first graphic information, the second graphic information to be projected onto the road surface by the projector 2.
  • This operation is the same as the operation described using the flowchart of FIG. 4 in the first embodiment, but in the third embodiment the following operation is further performed in the display editing device 1b.
  • When the coordinate conversion unit 115 calculates the second graphic information (see step ST405 in FIG. 4), it outputs the second graphic information to both the instruction unit 116 and the graphic information acquisition unit 122.
  • the graphic information acquisition unit 122 stores the second graphic information in the storage unit 119 in association with the position of the projector 2 indicated by the vehicle information output from the situation information acquisition unit 121.
  • The timing at which the coordinate conversion unit 115 outputs the second graphic information to the graphic information acquisition unit 122 may be any timing, as long as at least one piece of second graphic information calculated by the coordinate conversion unit 115 is stored for each position of the projector 2.
  • For example, when the user has finished drawing the first figure for one position of the projector 2, the user can designate the timing by touching a storage button or the like displayed on the display unit 102.
  • As a result, the position of the projector 2 and the second graphic information are associated with each other and stored in the storage unit 119 for each of the three cases where the car is at the three different points with respect to the one assumed position of the viewer.
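  • As an illustration only (the patent does not specify a data structure), the storage unit's association of projector positions with second graphic information can be modeled as a mapping from a stored position to a vertex list, with the nearest stored position looked up when the car is at an arbitrary point; the positions and figures below are illustrative assumptions.

```python
# Hedged sketch (not from the patent): second graphic information stored per
# projector position, with a nearest-position lookup.

storage = {
    (0.0, 0.0): [(0.0, 0.0), (1.0, 0.0)],    # second graphic at point 1
    (5.0, 0.0): [(0.0, 1.0), (1.0, 1.0)],    # second graphic at point 2
    (10.0, 0.0): [(0.0, 2.0), (1.0, 2.0)],   # second graphic at point 3
}

def nearest_graphic(position):
    """Return the second graphic stored for the closest projector position."""
    def sq_dist(p):
        return (p[0] - position[0]) ** 2 + (p[1] - position[1]) ** 2
    key = min(storage, key=sq_dist)
    return storage[key]

print(nearest_graphic((4.0, 0.5)))  # closest stored point is (5.0, 0.0)
```

  • A real implementation would key the store on whatever vehicle information the graphic information acquisition unit receives, not only on position.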
  • In the above, the operation of the display editing apparatus 1b has been described for the case where the vehicle information stored in the storage unit 119 in association with the second graphic information is the position information of the projector 2.
  • However, the display editing apparatus 1b can also associate the second graphic information with vehicle information other than the position information of the projector 2 and store it in the storage unit 119.
  • For example, the display editing apparatus 1b may acquire, as the vehicle information, information on the parking state of the car or the direction in which the car travels, and store different pieces of second graphic information in the storage unit 119 in association with the states indicated by the respective pieces of information.
  • The vehicle information in this case is information prepared for graphic editing, and the car does not have to actually be in the state indicated by the vehicle information.
  • The display editing device 1b may also acquire, as the vehicle information, information on the traveling speed of the car, and store different pieces of second graphic information in the storage unit 119 in association with the speeds indicated by the respective pieces of information.
  • the vehicle information in this case is also information prepared for graphic editing, and it is not necessary for the vehicle to actually travel at the traveling speed indicated by the vehicle information.
  • For example, the user draws a first figure for each of a plurality of traveling-speed ranges obtained by dividing the traveling speed, for example, every 10 km/h, and the graphic information acquisition unit 122 stores the different pieces of second graphic information calculated by the coordinate conversion unit 115 from those first figures in the storage unit 119 in association with the corresponding traveling speeds.
  • Alternatively, the display editing device 1b may use a threshold of the traveling speed of the car, and store in the storage unit 119 the second figure for when the traveling speed of the car is less than the threshold, together with a mode of deformation of that second figure for when the traveling speed becomes equal to or greater than the threshold.
  • The mode of deformation of the second figure can be, for example, enlargement of the second figure, or blinking.
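  • The threshold-based selection just described can be sketched as follows; this is a hedged illustration, not the patent's implementation, and the threshold value, the state keys, and the "normal"/"blink" mode names are all assumptions.

```python
# Hedged sketch (not from the patent): pick a stored second graphic and a
# deformation mode from the vehicle situation.

SPEED_THRESHOLD_KMH = 30.0  # assumed threshold, not specified in the text

storage = {
    "parked": "second_graphic_parked",
    "forward": "second_graphic_forward",
    "reverse": "second_graphic_reverse",
}

def select_graphic(state, speed_kmh):
    """Return the stored second graphic plus a deformation mode."""
    graphic = storage[state]
    # At or above the threshold, the same stored figure is shown in a
    # deformed mode (e.g. enlarged or blinking) rather than storing a
    # separate figure for every speed.
    mode = "blink" if speed_kmh >= SPEED_THRESHOLD_KMH else "normal"
    return graphic, mode

print(select_graphic("forward", 40.0))  # -> ('second_graphic_forward', 'blink')
```

  • The same pattern extends to per-10-km/h speed ranges by keying the store on a speed bucket instead of a single threshold.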
  • In the display editing device 1b, it can be set as appropriate which second graphic is to be stored in association with the situation of the projector 2 indicated by which vehicle information.
  • Further, the plurality of pieces of second graphic information stored in the storage unit 119 can be used to calculate second graphic information for an intermediate situation, by interpolating between the second graphic information associated with one situation of the projector 2 and the second graphic information associated with another situation of the projector 2.
  • For example, suppose that the position of the projector 2 and the second graphic information are associated with each other and stored in the storage unit 119 for each of the three cases where the car is at the three different points with respect to the one assumed position of the viewer.
  • In that case, image processing can be performed to continuously deform between the two pieces of second graphic information associated with two adjacent points among the three pieces of second graphic information already stored, thereby calculating second graphic information for the case where the car is between those points.
  • As a result, the projector 2 can project the continuously changing second graphic onto the road surface according to the movement of the car, and a graphic display without a sense of discomfort for the viewer can be performed.
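  • One simple way to realize the continuous deformation between two stored second graphics is linear interpolation of corresponding vertices; this is a hedged sketch, assuming both figures are stored as vertex lists of the same length (real morphing would also need vertex correspondence matching, which the text leaves unspecified).

```python
# Hedged sketch (not from the patent): blend the second graphics stored for
# two adjacent car positions to obtain the graphic for an in-between point.

def interpolate_graphics(graphic_a, graphic_b, t):
    """Blend two vertex lists; t=0 gives graphic_a, t=1 gives graphic_b."""
    assert len(graphic_a) == len(graphic_b)
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(graphic_a, graphic_b)]

# Second graphics stored for two adjacent points of the car:
at_point_1 = [(0.0, 0.0), (10.0, 0.0)]
at_point_2 = [(0.0, 4.0), (10.0, 4.0)]

# Car halfway between the two points:
print(interpolate_graphics(at_point_1, at_point_2, 0.5))
# -> [(0.0, 2.0), (10.0, 2.0)]
```

  • Sweeping `t` from 0 to 1 as the car moves between the two points yields the continuously changing projection described above.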
  • As described above, according to the display editing apparatus 1b of the third embodiment, the apparatus further includes the situation information acquisition unit 121 that acquires the situation of the drawing apparatus, and the storage unit 119 stores the second graphic information in association with the information indicating the situation of the drawing apparatus at the time the second graphic information was calculated. Thereby, even when a change in the situation regarding the drawing device occurs, graphic information for instructing the drawing device to draw can be edited according to that situation, so that the viewer can recognize a desired figure when looking at the figure drawn on the object from a non-facing position.
  • In the above description, the third embodiment is applied to the first embodiment; that is, the display editing apparatus 1b further includes the situation information acquisition unit 121 and the graphic information acquisition unit 122 in addition to the configuration of the display editing apparatus 1 of the first embodiment.
  • However, the present invention is not limited to this, and the third embodiment may also be applied to the second embodiment. That is, the display editing apparatus 1b may further include the status information acquisition unit 121 and the graphic information acquisition unit 122 in addition to the configuration of the display editing apparatus 1a of the second embodiment described with reference to FIG. 7.
  • In the first to third embodiments described above, it is assumed that the projector 2 attached to a car projects a figure as a still image onto the road surface, and that the display editing devices 1 to 1b edit the figure as a still image.
  • However, this is merely an example; the display editing devices 1 to 1b can also edit the figure as a moving image.
  • For example, the user may draw a moving image such as an animation as the first figure, using an existing drawing application or the like.
  • Further, the second graphic information calculated in the display editing devices 1 to 1b can be extracted in any data format, regardless of whether the second graphic information is a still image or a moving image. Therefore, for example, when it is desired that the second graphic indicated by the second graphic information calculated based on the first graphic information in the display editing devices 1 to 1b also be projected from a projector attached to another car, the user can take the second graphic information out of the display editing devices 1 to 1b, copy the data to a storage device of a control device that controls the projector attached to the other car, and have the second graphic projected and reproduced from the projector attached to the other car.
  • In the first to third embodiments described above, the display editing devices 1 to 1b are mounted on the tablet PC 100, but the present invention is not limited to this.
  • The device on which the display editing devices 1 to 1b are mounted may be any device that allows the user to view, on a display screen or directly, the real space on which the second graphic is drawn, and that allows the user to input and display a still image or a moving image so that it is seen superimposed on that real space.
  • For example, a video camera may be installed on the back of the display of an ordinary PC, so that the user captures an image of the real space with the video camera, displays it on the display screen of the PC, and draws the first figure on that real-space image.
  • Alternatively, the device on which the display editing devices 1 to 1b are mounted may be a see-through head mounted display; the user wears the head mounted display, the user's fingertips are recognized with a three-dimensional tracker, and the user can draw the first figure superimposed directly on the real space while looking at the real space.
  • In the first to third embodiments described above, the target on which the graphic is drawn is the road surface, and the drawing device is the projector 2 that projects light onto the road surface. However, this is merely an example.
  • For instance, the drawing apparatus may be a robot that draws on a whiteboard or the like with an arm holding a marker or the like, or may be a self-propelled robot that draws on the ground or the like using tape or the like.
  • That is, the object is not limited to the road surface, and may be an arbitrary surface such as a whiteboard; the object may also be a surface including unevenness.
  • Likewise, the drawing device is not limited to the projector 2, and may be any device capable of drawing a figure, such as the arm of a robot.
  • The drawing apparatus may also be an apparatus in which a mask whose shape is mechanically changed is attached to a light source, so that a figure is made visible by changing the shape of the mask.
  • In the first to third embodiments described above, the drawing device is the projector 2 that projects light onto the road surface, and the display editing devices 1 to 1b edit the second graphic information for causing the projector 2 to project a desired graphic.
  • Because the second figure projected from the projector 2 may be deformed on the road surface depending on the positional relationship between the projector 2 and the road surface and other factors, the coordinate conversion unit 115 of the display editing devices 1 to 1b calculates the second graphic information based on the information on the positional relationship between the road surface and the camera 101 and on the positional relationship between the projector 2 and the camera 101.
  • However, for a drawing device that draws directly at a specified position on the object without such deformation, for example the arm of a robot, the coordinate conversion unit 115 can calculate the second graphic information as long as the positional relationship between the camera 101 and the object is known. In this case, therefore, it is sufficient that the positional relationship acquisition unit 113 acquires the positional relationship between the camera 101 and the object, and that the coordinate conversion unit 115 calculates the second graphic information from the information on the positional relationship between the camera 101 and the object and from the first graphic information.
  • Note that, within the scope of the present invention, the embodiments may be freely combined, any component of the embodiments may be modified, and any component of the embodiments may be omitted.
  • The display editing apparatus according to the present invention can edit graphic information for instructing a drawing apparatus to draw so that a desired figure can be visually recognized when the viewer looks at the figure drawn on the object from a non-facing position; therefore, the present invention can be applied to a display editing apparatus or the like for editing graphic information for instructing a drawing device to draw a figure on an arbitrary object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract

This display editing apparatus is provided with: a captured image acquisition unit (111) that acquires an image obtained by imaging an object by a camera (101); a figure acceptance unit (112) that accepts first figure information about a first figure drawn on the image; a positional relationship acquisition unit (113) that acquires the positional relationship between the object and the camera when the image is captured; a coordinate conversion unit (115) that calculates second figure information on the basis of the positional relationship acquired by the positional relationship acquisition unit (113) and the first figure information; and an instruction unit (116) that outputs the second figure information calculated by the coordinate conversion unit (115) to a drawing device that draws on the object.

Description

Display editing apparatus, display editing method, and display editing program
The present invention relates to a display editing apparatus that calculates graphic information for a drawing apparatus that performs drawing on an object.
Conventionally, there is known a technique of drawing a figure on the surface of an object (hereinafter also referred to as "on the object") in accordance with some control. A figure drawn on an object in this manner may be drawn so that a figure of a desired shape (hereinafter simply referred to as a "desired figure") can be visually recognized not only when the person viewing the figure (hereinafter referred to as the "viewer") looks at it from a position directly facing the surface of the object on which the figure is drawn (hereinafter referred to as the "facing position"), but also when the viewer looks at it from a position shifted from the facing position (hereinafter referred to as a "non-facing position").

For example, Patent Document 1 discloses a technique including: acquisition means for acquiring an image to be displayed; display means for displaying the image to be displayed obtained by the acquisition means on a display surface; and control means for controlling the display means so that, when an instruction to superimpose an OSD (On Screen Display) image on the image to be displayed is input, an OSD image having a different shape depending on the position of the operator who input the instruction is superimposed on the image to be displayed and displayed on the display surface.
JP 2013-83755 A
 With the technique disclosed in Patent Document 1, an operator at a non-facing position with respect to the display surface can visually recognize, for example, a rectangular OSD image as the desired figure.
 However, when editing the graphic information used to instruct the drawing apparatus to draw, so that a viewer looking at the figure drawn on the object from a non-facing position can recognize the desired figure, the person editing the graphic information has to edit while imagining how the figure will appear when viewed from the non-facing position. Such editing work is difficult, which has been a problem.
 The present invention has been made to solve the above problem, and an object of the invention is to provide a display editing apparatus, a display editing method, and a display editing program capable of editing the graphic information used to instruct a drawing apparatus to draw, so that a viewer looking at the figure drawn on the object from a non-facing position can recognize the desired figure.
 A display editing apparatus according to the present invention includes: a captured image acquisition unit that acquires an image of an object captured by a camera; a graphic reception unit that receives first graphic information on a first graphic drawn on the image; a positional relationship acquisition unit that acquires the positional relationship between the object and the camera at the time the image was captured; a coordinate conversion unit that calculates second graphic information from the positional relationship acquired by the positional relationship acquisition unit and the first graphic information; and an instruction unit that outputs the second graphic information calculated by the coordinate conversion unit to a drawing apparatus that draws on the object.
 According to the present invention, the graphic information used to instruct the drawing apparatus to draw can be edited so that a viewer looking at the figure drawn on the object from a non-facing position can recognize the desired figure.
FIG. 1 is a diagram showing a configuration example of a display editing system according to Embodiment 1.
FIG. 2 is a block diagram showing a configuration example of the display editing apparatus of Embodiment 1.
FIG. 3 is a block diagram showing a configuration example of the projector of Embodiment 1.
FIG. 4 is a flowchart explaining the operation of the display editing apparatus of Embodiment 1.
FIG. 5 is a diagram explaining an example of a use scene of the display editing apparatus in Embodiment 1.
FIGS. 6A and 6B are diagrams showing an example of the hardware configuration of the display editing apparatus according to Embodiment 1 of the present invention.
FIG. 7 is a block diagram showing a configuration example of the display editing apparatus of Embodiment 2.
FIG. 8 is a flowchart explaining the operation of the display editing apparatus of Embodiment 2.
FIG. 9 is a diagram showing, for Embodiment 2, an example of the screen of the display unit on which the guide displayed by the graphic completion unit and the completed first graphic are shown.
FIG. 10 is a block diagram showing a configuration example of the display editing apparatus of Embodiment 3.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
Embodiment 1.
 The display editing apparatus edits the graphic information used to instruct the drawing apparatus to draw, so that a viewer looking at the figure drawn on the object by the drawing apparatus from a non-facing position can recognize the desired figure. The term "figure" in this specification includes not only simple geometric patterns but also characters, symbols, and the like. A figure may be a still image or a moving image.
 In the following description, as an example, the display editing apparatus is mounted on a tablet PC (Personal Computer) equipped with a camera and a touch panel, and the drawing apparatus is a projector attached to an automobile. The projector draws a figure on an object by projecting light. The figure drawn on the object is a still image, and the object is a road surface, that is, the projection surface onto which the projector projects light. The assumed viewer is a person on the road around that road surface.
 A user of the display editing apparatus of Embodiment 1 draws, on the tablet PC, the still image of the desired shape (hereinafter simply "desired still image") that a viewer on the road around the road surface should recognize when looking at the still image projected onto the road surface by the projector, and the display editing apparatus edits, based on the still image the user drew, the graphic information used to instruct the projector to project. When the user draws the still image with the tablet PC, the tablet PC displays a captured image of the road surface taken in advance by the camera from the viewer's viewpoint, so the user can draw the desired still image, exactly as it should appear from the viewer's viewpoint, at the desired position on the captured image displayed on the tablet PC.
 FIG. 1 is a diagram showing a configuration example of the display editing system according to Embodiment 1.
 The tablet PC 100 causes the display unit 102 (described later) to display an image captured by the camera 101 (described later) mounted on its back. The display unit 102 is a touch-panel display.
 The user draws the desired still image by touching the display unit 102 with a finger or a so-called stylus. The display editing apparatus 1 edits graphic information based on the still image the user input by drawing on the display unit 102 and outputs it to the projector 2, and the projector 2 projects the still image indicated by that graphic information onto the road surface.
 The tablet PC 100 and the projector 2 only need to be able to communicate by some means, whether wired or wireless.
 FIG. 2 is a block diagram showing a configuration example of the display editing apparatus 1 of Embodiment 1.
 As shown in FIG. 2, the display editing apparatus 1 includes a captured image acquisition unit 111, a graphic reception unit 112, a positional relationship acquisition unit 113, a graphic position determination unit 114, a coordinate conversion unit 115, an instruction unit 116, a display control unit 117, an audio output control unit 118, and a storage unit 119.
 The captured image acquisition unit 111 acquires a captured image of the road surface taken by the camera 101.
 The captured image acquisition unit 111 outputs the acquired captured image to the positional relationship acquisition unit 113 and the display control unit 117. Hereinafter, the captured image that the captured image acquisition unit 111 acquires and outputs to the positional relationship acquisition unit 113 and the display control unit 117 is also referred to simply as the image.
 The graphic reception unit 112 receives graphic information (hereinafter "first graphic information") on a figure (hereinafter "first graphic") drawn on the image displayed on the display unit 102.
 When the image is displayed on the display unit 102, the user operates, for example, the touch panel to draw the first graphic on the image as the desired still image. The graphic reception unit 112 receives the first graphic information on the first graphic drawn by the user.
 The graphic reception unit 112 outputs the received first graphic information to the graphic position determination unit 114.
 The positional relationship acquisition unit 113 acquires the positional relationship between the road surface and the camera 101 at the time the captured image acquired by the captured image acquisition unit 111 was taken. The positional relationship acquisition unit 113 also acquires the positional relationship between the projector 2 and the camera 101 at that time.
 The positional relationship acquisition unit 113 outputs the acquired positional relationship information to the graphic position determination unit 114.
 The graphic position determination unit 114 determines which position on the road surface the first graphic corresponds to, based on the information on the positional relationship between the road surface and the camera 101 and on the positional relationship between the projector 2 and the camera 101 output from the positional relationship acquisition unit 113, and on the first graphic information output from the graphic reception unit 112.
 If the first graphic falls outside the road surface, the graphic position determination unit 114 outputs error information indicating that the first graphic cannot be projected onto the road surface to the display control unit 117 or the audio output control unit 118.
 If the first graphic is on the road surface, the graphic position determination unit 114 outputs the first graphic information, together with the information on the positional relationship between the road surface and the camera 101 and on the positional relationship between the projector 2 and the camera 101, to the coordinate conversion unit 115.
 When the graphic position determination unit 114 outputs the first graphic information and the accompanying information, the coordinate conversion unit 115 converts the coordinates of the first graphic information, using the information on the positional relationship between the road surface and the camera 101, the information on the positional relationship between the projector 2 and the camera 101, and the first graphic information, and calculates graphic information (hereinafter "second graphic information") on the figure the projector 2 should project onto the road surface (hereinafter "second graphic"). The second graphic is the figure the projector 2 should project so that the figure drawn on the object is recognized by the viewer, with the shape of the first graphic unchanged, when seen from the camera position at the time the captured image acquired by the captured image acquisition unit 111 was taken.
 The coordinate conversion unit 115 outputs the calculated second graphic information to the instruction unit 116.
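 One way such a conversion can be realized is a two-step mapping applied to each vertex of the first graphic: camera pixel → road-surface point → projector pixel. The sketch below models both the camera and the projector as simplified downward-looking pinholes over a flat road plane; the geometry, names, and values are illustrative assumptions only — an actual implementation would use the full poses obtained from the positional relationship information.

```python
def camera_pixel_to_road(px, py, cam):
    """Ray-cast a camera pixel onto the flat road plane z = 0."""
    dx = (px - cam["cx"]) / cam["f"]
    dy = (py - cam["cy"]) / cam["f"]
    h = cam["height"]                    # camera height above the road
    return (cam["x"] + h * dx, cam["y"] + h * dy)


def road_to_projector_pixel(rx, ry, proj):
    """Perspective-project a road point into the projector's image plane."""
    u = proj["f"] * (rx - proj["x"]) / proj["height"] + proj["cx"]
    v = proj["f"] * (ry - proj["y"]) / proj["height"] + proj["cy"]
    return (u, v)


def first_to_second(vertices, cam, proj):
    """Convert first-graphic vertices (camera pixels) into
    second-graphic vertices (projector pixels)."""
    return [road_to_projector_pixel(*camera_pixel_to_road(px, py, cam), proj)
            for (px, py) in vertices]


cam = {"x": 0.0, "y": 0.0, "height": 2.0, "f": 100.0, "cx": 0.0, "cy": 0.0}
proj = {"x": 1.0, "y": 0.0, "height": 1.0, "f": 100.0, "cx": 0.0, "cy": 0.0}
second = first_to_second([(50.0, 0.0)], cam, proj)
```

 Because both mappings are plane-to-plane perspective maps, their composition is a homography between the camera image and the projector image; applying it vertex by vertex pre-distorts the first graphic exactly so that it appears undistorted from the camera viewpoint.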
 The instruction unit 116 outputs the second graphic information calculated by the coordinate conversion unit 115 to the projector 2, which projects figures onto the road surface, and the projector 2 projects the second graphic indicated by the second graphic information onto the road surface.
 The display control unit 117 causes the display unit 102 to display the image acquired by the captured image acquisition unit 111. In addition, when the graphic position determination unit 114 outputs error information indicating that the first graphic cannot be projected onto the road surface, the display control unit 117 causes the display unit 102 to display information corresponding to the error information, such as an error message.
 When the graphic position determination unit 114 outputs error information indicating that the first graphic cannot be projected onto the road surface, the audio output control unit 118 causes the audio output device 103 to output a sound or voice corresponding to the error information, such as a buzzer sound or a spoken error message. In Embodiment 1, the audio output device 103 is a speaker provided in the tablet PC 100.
 The storage unit 119 stores information on the projector 2, such as its position and orientation. The storage unit 119 also stores information on the shape of the road surface, such as unevenness.
 The information stored in the storage unit 119 may be set in advance by the user or the like; alternatively, the position of the projector 2 and similar information may be set based on information the display editing apparatus 1 itself detects when it is used, as described later.
 In Embodiment 1 the storage unit 119 is provided in the display editing apparatus 1, but this is not a limitation; the storage unit 119 may instead be provided outside the display editing apparatus 1, in a location the display editing apparatus 1 can refer to.
 FIG. 3 is a block diagram showing a configuration example of the projector 2 of Embodiment 1.
 As shown in FIG. 3, the projector 2 includes a drawing instruction reception unit 21 and a drawing unit 22.
 The drawing instruction reception unit 21 receives the second graphic information output from the display editing apparatus 1.
 The drawing instruction reception unit 21 outputs the received second graphic information to the drawing unit 22.
 The drawing unit 22 projects the second graphic indicated by the second graphic information received by the drawing instruction reception unit 21 onto the road surface.
 Next, the operation of the display editing apparatus 1 of Embodiment 1 will be described.
 FIG. 4 is a flowchart explaining the operation of the display editing apparatus 1 of Embodiment 1.
 FIG. 5 is a diagram explaining an example of a use scene of the display editing apparatus 1 in Embodiment 1.
 As shown in FIG. 5, the user holds the tablet PC 100 and, through it, observes the projector 2, the automobile 51 to which the projector 2 is attached, and their surroundings. The user then uses the camera of the tablet PC 100 to capture, from the viewpoint position at which a viewer is expected to look at the still image projected by the projector 2, an image that includes at least the road surface on which the still image is to be drawn and the projector 2.
 The projector 2 is in a state where it can project an arbitrary still image onto the road surface.
 On the premise of the above state, the display editing apparatus 1 operates as described in the flowchart of FIG. 4.
 The graphic reception unit 112 waits until it receives first graphic information ("NO" in step ST401); when it receives first graphic information, it outputs the received first graphic information to the graphic position determination unit 114.
 Once the user starts drawing the first graphic by operating the touch panel, the graphic reception unit 112 receives the drawn first graphic information as it arrives; thereafter, every time even a small part of the first graphic is drawn by the user, the graphic reception unit 112 continues to receive first graphic information.
 The positional relationship acquisition unit 113 acquires the positional relationship between the road surface and the camera 101, and also the positional relationship between the projector 2 and the camera 101 (step ST402).
 Specifically, for example, a marker is installed in advance at a predetermined position in the space within the imaging range of the camera 101, and the positional relationship acquisition unit 113 detects the marker appearing in the video to acquire the three-dimensional positional relationships between the road surface and the camera 101 and between the projector 2 and the camera 101. This is merely one example; the positional relationship acquisition unit 113 may acquire these positional relationships by other existing methods. For example, it may acquire them using feature quantities in the space, or using an existing application or the like that recognizes the surrounding three-dimensional space. Alternatively, a three-dimensional tracker may be attached to the tablet PC 100, and the positional relationship acquisition unit 113 may acquire the positional relationships from the tracker's measurement information. The display editing apparatus 1 stores in advance information on the mounting position and mounting orientation of the projector 2 on the automobile, and information on the shape of the road surface when it includes unevenness.
 The positional relationship acquisition unit 113 outputs the acquired positional relationship information to the graphic position determination unit 114.
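 As a tiny illustration of the marker-based idea: by the pinhole similar-triangle relation, a marker of known physical size observed at a known pixel size yields the camera-to-marker distance as distance = f × real_size / pixel_size. A real system for step ST402 would recover a full 3-D pose (for example from the marker's corner points); the function below, with purely illustrative values, shows only this scale component.

```python
def distance_from_marker(focal_px, marker_size_m, marker_size_px):
    """Camera-to-marker distance via the pinhole similar-triangle relation.

    focal_px:       camera focal length in pixels
    marker_size_m:  physical edge length of the marker in metres
    marker_size_px: observed edge length of the marker in pixels
    """
    return focal_px * marker_size_m / marker_size_px


# A 0.2 m marker seen 100 px wide by a camera with a 1000 px focal
# length lies 2 m away.
d = distance_from_marker(1000.0, 0.2, 100.0)
```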
 The graphic position determination unit 114 determines which position on the road surface the first graphic corresponds to, based on the information on the positional relationship between the road surface and the camera 101 and on the positional relationship between the projector 2 and the camera 101 output from the positional relationship acquisition unit 113, and on the first graphic information output from the graphic reception unit 112 (step ST403).
 The graphic position determination unit 114 then determines whether the first graphic is on the road surface (step ST404); if it is outside the road surface ("NO" in step ST404), the unit outputs error information indicating that the first graphic cannot be projected onto the road surface to the display control unit 117 or the audio output control unit 118 (step ST407).
 When the graphic position determination unit 114 outputs the error information, the display control unit 117 causes the display unit 102 to display information corresponding to the error information, such as an error message. Likewise, when the error information is output, the audio output control unit 118 causes the audio output device 103 to output a sound or voice corresponding to the error information, such as a buzzer sound or a spoken error message. The graphic position determination unit 114 may also output the error information to both the display control unit 117 and the audio output control unit 118.
 On the other hand, if the first graphic is on the road surface ("YES" in step ST404), the graphic position determination unit 114 outputs the first graphic information, together with the information on the positional relationship between the road surface and the camera 101 and on the positional relationship between the projector 2 and the camera 101, to the coordinate conversion unit 115.
 The coordinate conversion unit 115 calculates the second graphic information on the second graphic the projector 2 should project onto the road surface, from the information on the positional relationship between the road surface and the camera 101, the information on the positional relationship between the projector 2 and the camera 101, and the first graphic information output by the graphic position determination unit 114 in step ST404 (step ST405).
 The coordinate conversion unit 115 outputs the calculated second graphic information to the instruction unit 116.
 The instruction unit 116 outputs to the projector 2 the second graphic information calculated by the coordinate conversion unit 115 in step ST405 (step ST406).
 In the projector 2, the drawing instruction reception unit 21 receives the second graphic information output from the instruction unit 116, and the drawing unit 22 projects the second graphic indicated by the second graphic information onto the road surface.
 The graphic reception unit 112 then determines whether it has received further first graphic information (step ST408); if it has ("YES" in step ST408), the process returns to step ST402 and the subsequent steps are repeated.
 If no first graphic information is received ("NO" in step ST408), the process ends. First graphic information is considered not received when, for example, no new first graphic information has been input for a predetermined time, or the user has performed an operation to end the input of first graphic information.
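 The flow of steps ST401 to ST408 described above can be sketched as a simple loop. The callables below are stand-ins for the units described earlier (graphic reception, positional relationship acquisition, position determination, coordinate conversion, instruction output, and error reporting); all names are illustrative, not from the patent.

```python
def editing_loop(next_first_graphic, get_relations, on_road, convert,
                 send_to_projector, report_error):
    """Schematic of steps ST401-ST408: keep converting received first
    graphics into second graphics until input stops."""
    while True:
        g1 = next_first_graphic()                     # ST401 / ST408
        if g1 is None:                                # no more input: finish
            break
        relations = get_relations()                   # ST402
        if not on_road(g1, relations):                # ST403 / ST404
            report_error("figure cannot be projected "
                         "onto the road surface")     # ST407
            continue
        g2 = convert(g1, relations)                   # ST405
        send_to_projector(g2)                         # ST406


# Exercise the loop with stand-in callables.
sent, errors = [], []
inputs = iter([("on",), ("off",), None])
editing_loop(
    next_first_graphic=lambda: next(inputs),
    get_relations=lambda: "poses",
    on_road=lambda g, rel: g[0] == "on",
    convert=lambda g, rel: ("second",) + g,
    send_to_projector=sent.append,
    report_error=errors.append,
)
```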
 In general, in a situation such as creating a projection mapping, the user follows the procedure of editing the figure to be displayed on a PC and having a projector project it. On the PC, in many cases, the user edits the figure with image editing software or the like while viewing it from a directly facing position, so the user can only imagine how the created figure will look when projected; for example, the user must edit while worrying about the appearance from a non-facing position, which makes the task of creating the figure difficult. Furthermore, since the scale on the PC screen naturally differs from that of real space, a figure displayed on the PC screen may, when actually projected, look different from what the user expected.
 In contrast, in Embodiment 1, as described above, the user can draw on the tablet PC 100, directly as the first graphic, the desired figure that the viewer should recognize when looking at the figure drawn on the object from the non-facing position. As a result, editing the figure becomes easier for the user, and the quality of the edited figure (the second graphic) also improves. Moreover, because input from the tablet PC 100 is made on an image of the actual object onto which the figure will be projected, the user can work while confirming the actual sense of scale; the projected figure no longer deviates from the intended image, and a high-quality figure can be projected efficiently.
 FIGS. 6A and 6B are diagrams showing an example of the hardware configuration of the display editing apparatus 1 according to Embodiment 1 of the present invention.
 In Embodiment 1 of the present invention, the functions of the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the display control unit 117, and the audio output control unit 118 are realized by a processing circuit 601. That is, the display editing apparatus 1 includes the processing circuit 601 for performing the control of converting the first graphic into the second graphic and having it projected, based on the acquired captured image and the first graphic information received from the user.
 The processing circuit 601 may be dedicated hardware as shown in FIG. 6A, or may be a CPU (Central Processing Unit) 605 that executes a program stored in a memory 606 as shown in FIG. 6B.
 When the processing circuit 601 is dedicated hardware, the processing circuit 601 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
 When the processing circuit 601 is the CPU 605, the functions of the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118 are realized by software, firmware, or a combination of software and firmware. That is, these units are realized by the CPU 605 executing programs stored in an HDD (Hard Disk Drive) 602, the memory 606, or the like, or by a processing circuit such as a system LSI (Large-Scale Integration). It can also be said that the programs stored in the HDD 602, the memory 606, and the like cause a computer to execute the procedures and methods of the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118. Here, the memory 606 corresponds to, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), or EEPROM (Electrically Erasable Programmable Read-Only Memory), or to a magnetic disk, flexible disk, optical disc, compact disc, mini disc, DVD (Digital Versatile Disc), or the like.
The functions of the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118 may also be realized partly by dedicated hardware and partly by software or firmware. For example, the function of the captured image acquisition unit 111 can be realized by the processing circuit 601 as dedicated hardware, while the functions of the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118 can be realized by the processing circuit reading and executing programs stored in the memory 606.
The storage unit 119 uses, for example, the HDD 602. This is merely an example, and the storage unit 119 may instead be configured from a DVD, the memory 606, or the like.
The display editing apparatus 1 also includes an input interface device 603 and an output interface device 604 that communicate with external devices such as the camera 101, the display unit 102, or the projector 2. For example, the captured image acquisition unit 111 acquires a captured image taken by the camera 101 via the input interface device 603. The graphic reception unit 112 acquires the first graphic information entered by the user's input operation via the input interface device 603. The instruction unit 116 transmits the second graphic information to the projector 2 via the output interface device 604.
As described above, according to the first embodiment, the display editing apparatus 1 includes the captured image acquisition unit 111 that acquires an image of an object captured by the camera 101, the graphic reception unit 112 that receives first graphic information on a first graphic drawn on the image, the positional relationship acquisition unit 113 that acquires the positional relationship between the object and the camera at the time the image was captured, the coordinate conversion unit 115 that calculates second graphic information from the positional relationship acquired by the positional relationship acquisition unit 113 and the first graphic information, and the instruction unit 116 that outputs the second graphic information calculated by the coordinate conversion unit 115 to a drawing device that draws on the object. It is therefore possible to edit the graphic information used to instruct the drawing device to draw so that a viewer looking at the graphic drawn on the object from a non-facing position can recognize the desired graphic.
In the first embodiment, the camera may be in any position when photographing the object. The first embodiment is therefore particularly effective when instructing the drawing device to draw so that a desired graphic is recognized by a viewer looking at the graphic drawn on the object from a non-facing position, but it can also edit the graphic information used to instruct the drawing device in cases where the viewer is to recognize the desired graphic when looking at the drawn graphic from a directly facing position.
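The conversion from first graphic information (drawn in screen coordinates) to second graphic information (projector coordinates) can be sketched as a chain of plane-to-plane mappings. The following is an illustrative sketch only, not the patented implementation: planar homographies (3x3 matrices) are a standard way to model such mappings between the screen, the road-surface plane, and the projector, and the matrices and point format used here are assumptions for illustration.

```python
# Illustrative sketch of the kind of conversion the coordinate
# conversion unit 115 performs: points of the first graphic are mapped
# onto the road-surface plane and then into projector coordinates.
# The matrices below are placeholders; the real ones would be derived
# from the positional relationships acquired by the positional
# relationship acquisition unit 113.

def apply_homography(h, point):
    """Apply a 3x3 homography, given as nested lists, to a 2D point."""
    x, y = point
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

def convert_first_to_second(first_graphic, screen_to_road, road_to_projector):
    """Map screen-space points to road coordinates, then to projector coordinates."""
    on_road = [apply_homography(screen_to_road, p) for p in first_graphic]
    return [apply_homography(road_to_projector, p) for p in on_road]

# Placeholder identity homographies; real matrices would encode the
# camera and projector poses.
IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
first_graphic = [(10.0, 20.0), (30.0, 40.0)]
second_graphic = convert_first_to_second(first_graphic, IDENTITY, IDENTITY)
```

With identity placeholders the points pass through unchanged; in practice the two homographies would apply the perspective distortion needed so the projected graphic looks correct from the viewer's position.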
Second Embodiment
In the first embodiment, the user draws the first graphic while viewing the surface of the object onto which the graphic is projected from a non-facing position, so in many cases the second graphic projected from the projector 2 is not symmetrical when viewed from the projector 2 side or the automobile side. However, depending on the purpose of the graphic that the projector 2 projects onto the road surface, it may be desirable for the graphic to be symmetrical when viewed from the projector 2 or the automobile side. For example, when the graphic projected from the projector 2 mounted on an automobile is intended to let pedestrians and others around the automobile, as the assumed viewers, recognize the automobile's direction of travel, it is desirable to project from the projector 2 a graphic that is symmetrical when viewed from the projector 2 side or the automobile side, so that viewers at any position can accurately recognize the direction of travel.
The second embodiment describes a configuration in which, even when the user draws the first graphic while viewing the surface of the object onto which the graphic is projected from a non-facing position, the projector can project as the second graphic a desired graphic that is easy to see from the assumed viewer's position and that is symmetrical when viewed from the projector 2 side or the automobile side.
In the second embodiment, as in the first embodiment, as an example, the display editing apparatus 1a (described later) is mounted on the tablet PC 100. The user uses the tablet PC 100 to draw a desired still image that a viewer on the road around the road surface should be able to recognize when looking at the still image projected onto that road surface by the projector 2 mounted on an automobile, and the display editing apparatus edits, based on the still image drawn by the user, the graphic information used to instruct the projector 2 to project.
The configuration of the display editing system of the second embodiment is the same as the configuration described with reference to FIG. 1 in the first embodiment, so a duplicate description is omitted. The configuration of the projector 2 of the second embodiment is likewise the same as the configuration described with reference to FIG. 3 in the first embodiment, so a duplicate description is omitted.
FIG. 7 is a block diagram showing a configuration example of the display editing apparatus 1a of the second embodiment.
In FIG. 7, components that are the same as those described with reference to FIG. 2 in the first embodiment are given the same reference numerals, and duplicate descriptions are omitted.
The display editing apparatus 1a differs from the display editing apparatus 1 of the first embodiment in that it further includes a graphic complementing unit 120.
The graphic complementing unit 120 causes the display control unit 117 to display a guide superimposed on the image displayed on the display unit 102 at an arbitrary time before the user starts drawing the first graphic. When the graphic complementing unit 120 acquires the first graphic information received by the graphic reception unit 112, it complements the first graphic, based on that first graphic information, so that the first graphic becomes left-right symmetrical when viewed from the projector 2 side or the automobile side, and creates graphic information for the complemented first graphic (hereinafter, "complemented first graphic information").
For example, the graphic complementing unit 120 displays, as the guide, an image of a grid that is left-right symmetrical when viewed from the projector 2 side or the automobile side. In the second embodiment, the graphic complementing unit 120 superimposes the guide on the position of the road surface in the image.
The graphic complementing unit 120 outputs the complemented first graphic information to the graphic position determination unit 114. It also outputs the complemented first graphic information to the display control unit 117, which displays it superimposed on the image shown by the display unit 102. (In FIG. 7, the connection line for outputting information from the graphic complementing unit 120 to the display control unit 117 is omitted.)
The graphic position determination unit 114 determines which position on the road surface the complemented first graphic corresponds to, based on the information on the positional relationship between the road surface and the camera 101 and on the positional relationship between the projector 2 and the camera 101 output from the positional relationship acquisition unit 113, and on the complemented first graphic information output from the graphic complementing unit 120.
The hardware configuration of the display editing apparatus 1a is the same as the configuration described with reference to FIGS. 6A and 6B in the first embodiment, so a duplicate description is omitted.
Like the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118, the graphic complementing unit 120 is realized by the processing circuit 601.
Next, the operation of the display editing apparatus 1a of the second embodiment will be described.
FIG. 8 is a flowchart explaining the operation of the display editing apparatus 1a of the second embodiment.
The specific operations of steps ST801 and ST803 to ST809 in FIG. 8 are the same as the specific operations of steps ST401 to ST408 in FIG. 4 described in the first embodiment, so duplicate descriptions are omitted.
The second embodiment only adds the operation of step ST802 in FIG. 8 to the operation described with reference to FIG. 4. It is assumed that, before the graphic reception unit 112 receives the first graphic information in step ST801, the graphic complementing unit 120 has already displayed the guide on the display unit 102.
In step ST801, when the graphic reception unit 112 receives the first graphic information, it outputs the received first graphic information to the graphic complementing unit 120.
Based on the first graphic information output from the graphic reception unit 112, the graphic complementing unit 120 complements the first graphic so that it becomes left-right symmetrical, and creates the complemented first graphic information.
The graphic complementing unit 120 outputs the complemented first graphic information to the graphic position determination unit 114, and also to the display control unit 117. On acquiring the complemented first graphic information, the display control unit 117 displays the complemented first graphic indicated by that information superimposed on the image shown by the display unit 102.
FIG. 9 is a diagram showing an example of the screen of the display unit 102 on which the guide and the complemented first graphic are displayed in the second embodiment.
In FIG. 9, as an example, it is assumed that the user has input an oblique line (9a in FIG. 9) as the first graphic.
The graphic complementing unit 120 adds a complementary oblique line (9b in FIG. 9) so that the first graphic input by the user becomes left-right symmetrical when viewed from the projector 2 side or the automobile side.
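The mirroring step just described can be sketched as follows. This is a minimal illustration, assuming the first graphic is a list of 2D points and the symmetry axis is the vertical line x = 0 as seen from the projector or automobile side; the actual axis and data representation are not specified in the text.

```python
# Sketch of the left-right complementation performed by the graphic
# complementing unit 120: the user's stroke (e.g. line 9a in FIG. 9)
# is mirrored about the assumed symmetry axis to produce the
# complemented first graphic (9a plus its mirror image 9b).

def complement_left_right(figure, axis_x=0.0):
    """Return the figure together with its mirror image about x = axis_x."""
    mirrored = [(2.0 * axis_x - x, y) for (x, y) in figure]
    return figure + mirrored

# An oblique line on the left side; its mirror completes the symmetry.
stroke_9a = [(-3.0, 0.0), (-1.0, 2.0)]
complemented = complement_left_right(stroke_9a)
# complemented == [(-3.0, 0.0), (-1.0, 2.0), (3.0, 0.0), (1.0, 2.0)]
```

The other symmetries mentioned later (vertical, point symmetry, and so on) would follow the same pattern with a different reflection rule.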
The positional relationship acquisition unit 113 acquires the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101 (step ST803), and the graphic position determination unit 114 determines which position on the road surface the complemented first graphic corresponds to, based on the information on the positional relationship between the road surface and the camera 101 and on the positional relationship between the projector 2 and the camera 101 output from the positional relationship acquisition unit 113, and on the complemented first graphic information output from the graphic complementing unit 120 (step ST804).
In the second embodiment described above, the graphic complementing unit 120 displays a grid as the guide, but this is merely an example. The graphic complementing unit 120 may display as the guide any information that assists the input of a user who intends to edit a graphic that is left-right symmetrical when viewed from the projector 2 side or the automobile side.
Alternatively, the graphic complementing unit 120 may complement the first graphic so that it becomes left-right symmetrical without displaying any guide, displaying only the complemented, left-right symmetrical first graphic.
In the second embodiment described above, the graphic complementing unit 120 complements the first graphic so that it becomes left-right symmetrical, but the method of complementation is not limited to left-right symmetry. The graphic complementing unit 120 may complement the first graphic so that, viewed from the projector side or the automobile side, it becomes, for example, vertically symmetrical, both vertically and horizontally symmetrical, three-way symmetrical, or point-symmetrical.
As described above, according to the second embodiment, the display editing apparatus includes the graphic complementing unit 120, which complements the first graphic so that it becomes left-right symmetrical based on the first graphic information received by the graphic reception unit 112 and creates the complemented first graphic information, and the coordinate conversion unit 115 calculates the second graphic information from the positional relationship acquired by the positional relationship acquisition unit 113 and the complemented first graphic information created by the graphic complementing unit 120. As a result, even when the user creates a graphic from an arbitrary viewpoint, a graphic that is symmetrical when viewed from the drawing device can be displayed, allowing high-quality graphics to be created.
Third Embodiment
The first and second embodiments assumed, for example, that the automobile to which the projector 2 is attached is stationary, and did not consider changes in the situation related to the projector 2, such as the position of the projector 2 moving as the automobile travels, or the automobile's travel state changing.
The third embodiment describes a configuration in which, even when the situation related to the projector 2 changes, the user's desired graphic can be projected from the projector 2 in accordance with that change.
In the third embodiment, as in the first and second embodiments, as an example, the display editing apparatus 1b (described later) is mounted on the tablet PC 100. The user uses the tablet PC 100 to draw a desired still image that a viewer on the road around the road surface should be able to recognize when looking at the still image projected onto that road surface by the projector 2 mounted on an automobile, and the display editing apparatus edits, based on the still image drawn by the user, the graphic information used to instruct the projector 2 to project.
FIG. 10 is a block diagram showing a configuration example of the display editing apparatus 1b of the third embodiment.
In FIG. 10, components that are the same as those described with reference to FIG. 2 in the first embodiment are given the same reference numerals, and duplicate descriptions are omitted.
The display editing apparatus 1b differs from the display editing apparatus 1 of the first embodiment in that it further includes a situation information acquisition unit 121 and a graphic information acquisition unit 122.
The configuration of the display editing system of the third embodiment is the same as the configuration described with reference to FIG. 1 in the first embodiment, so a duplicate description is omitted. The configuration of the projector 2 of the third embodiment is likewise the same as the configuration described with reference to FIG. 3 in the first embodiment, so a duplicate description is omitted.
The situation information acquisition unit 121 acquires situation information about the projector 2. In the third embodiment, the situation information acquisition unit 121 acquires vehicle information from the automobile to which the projector 2 is attached as the situation information of the projector 2. The vehicle information is any of various pieces of information about the automobile, including information about the projector 2 mounted on it, that can serve as a trigger for changing the still image projected from the projector 2. For example, the vehicle information is the position information of the automobile or of the projector 2 attached to it, or information on the automobile's travel state, such as whether the automobile is parked, the direction in which it is traveling, or its travel speed.
The situation information acquisition unit 121 outputs the acquired vehicle information to the graphic information acquisition unit 122.
The graphic information acquisition unit 122 associates the situation of the projector 2 indicated by the vehicle information output from the situation information acquisition unit 121 with the second graphic information calculated in that situation, and stores them in the storage unit 119. In addition to the information described in the first and second embodiments, the storage unit 119 therefore stores the situation of the projector 2 and the second graphic information corresponding to that situation in association with each other.
For example, assuming that the intended viewer stays at one position, the user draws on the screen of the tablet PC 100 a first graphic that the viewer should recognize for each of several different points at which the automobile may be located relative to that position, thereby drawing a plurality of first graphics. The display editing apparatus 1b calculates second graphic information for each of the plural pieces of first graphic information representing those graphics. The graphic information acquisition unit 122 associates each piece of second graphic information with the corresponding position information of the automobile and stores it in the storage unit 119. Details are described later.
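This situation-to-graphic association can be sketched as a simple keyed store. The class and key format below are illustrative assumptions only; the text does not specify how the storage unit 119 organizes its records.

```python
# Sketch of how the graphic information acquisition unit 122 might
# associate second graphic information with the projector situation
# (e.g. a position) reported by the situation information acquisition
# unit 121. A dictionary keyed by situation stands in for the
# storage unit 119.

class GraphicStore:
    def __init__(self):
        self._by_situation = {}

    def save(self, situation, second_graphic):
        """Associate a projector situation with second graphic information."""
        self._by_situation[situation] = second_graphic

    def lookup(self, situation):
        """Return the graphic stored for this situation, or None."""
        return self._by_situation.get(situation)

# One graphic per automobile position, as in the three-point example.
store = GraphicStore()
store.save("point_A", [(0.0, 0.0), (1.0, 1.0)])
store.save("point_B", [(0.0, 0.0), (2.0, 2.0)])
```

At projection time, the current vehicle information would select which stored second graphic the projector 2 displays.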
The hardware configuration of the display editing apparatus 1b is the same as the configuration described with reference to FIGS. 6A and 6B in the first embodiment, so a duplicate description is omitted.
Like the captured image acquisition unit 111, the graphic reception unit 112, the positional relationship acquisition unit 113, the graphic position determination unit 114, the coordinate conversion unit 115, the instruction unit 116, the display control unit 117, and the audio output control unit 118, the situation information acquisition unit 121 and the graphic information acquisition unit 122 are realized by the processing circuit 601.
Next, the operation of the display editing apparatus 1b of the third embodiment will be described.
The following describes, as an example, the operation when the user uses the display editing apparatus 1b to edit the graphic for each position in a case where the projector 2 changes the projected graphic according to the position of the automobile.
In this case, with the automobile to which the projector 2 is attached placed in advance at each of a plurality of different points, the user operates the touch panel of the tablet PC 100 and draws a first graphic on the image displayed on the display unit 102. Here, the plurality of points is, for example, three points, and the user draws a first graphic for the state in which the automobile is at each of the three points.
At each of the three points, the display editing apparatus 1b receives the first graphic drawn by the user, determines the positional relationship between the road surface and the camera 101 and between the projector 2 and the camera 101, and calculates, from the determined positional relationship information and the first graphic information, the second graphic information that the projector 2 should project onto the road surface. This operation is the same as the operation described with the flowchart of FIG. 4 in the first embodiment, but in the third embodiment the display editing apparatus 1b additionally performs the following operation.
In the display editing apparatus 1b, when the coordinate conversion unit 115 calculates the second graphic information (see step ST405 in FIG. 4), it outputs the second graphic information to the instruction unit 116 and to the graphic information acquisition unit 122. The graphic information acquisition unit 122 associates the second graphic information with the position of the projector 2 indicated by the vehicle information output from the situation information acquisition unit 121 and stores them in the storage unit 119.
The timing at which the coordinate conversion unit 115 outputs the second graphic information to the graphic information acquisition unit 122 need only be such that at least one piece of second graphic information calculated by the coordinate conversion unit 115 is stored for each position of the projector 2. For example, it can be the timing at which the user, having finished drawing the first graphic for one position of the projector 2, gives an instruction such as touching a save button displayed on the display unit 102.
As a result, for each of the three different points at which the automobile may be located relative to the one assumed viewer position, the position of the projector 2 and the second graphic information are associated with each other and stored in the storage unit 119.
The above description covered the operation of the display editing apparatus 1b when the vehicle information stored in the storage unit 119 in association with the second graphic information is the position information of the projector 2. This is only an example, however, and the display editing apparatus 1b can also associate vehicle information other than the position information of the projector 2 with the second graphic information and store them in the storage unit 119, as the following examples illustrate.
For example, the display editing apparatus 1b can acquire, as the vehicle information, information on the automobile's parking state or direction of travel, and store different pieces of second graphic information in the storage unit 119 in association with the states indicated by each piece of information.
The vehicle information in this case is information prepared for editing the graphics, and the automobile need not actually be in the state indicated by that vehicle information.
 Also, for example, the display editing apparatus 1b may acquire, as vehicle information, information on the traveling speed of the vehicle, and store different pieces of second graphic information in the storage unit 119, each associated with the state indicated by the corresponding information.
 The vehicle information in this case is likewise information prepared for editing the graphic; the vehicle does not actually have to be traveling at the speed indicated by that vehicle information. In this case, the user draws a first figure for each of a plurality of traveling speeds divided into intervals of, for example, 10 km/h, and for the first graphic information representing each first figure, the graphic information acquisition unit 122 stores the corresponding second graphic information calculated by the coordinate conversion unit 115 in the storage unit 119 in association with the traveling speed.
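As a minimal sketch of the speed-keyed storage just described, the following code stores and looks up second graphic information per 10 km/h speed bucket. All names (`GraphicStore`, `speed_bucket`) and the string placeholders for graphic data are hypothetical, standing in for the storage unit 119 and the second graphic information.

```python
def speed_bucket(speed_kmh: float, step: int = 10) -> int:
    """Map a traveling speed to the lower edge of its 10 km/h bucket."""
    return int(speed_kmh // step) * step

class GraphicStore:
    """Stand-in for the storage unit 119: second graphic info per speed bucket."""
    def __init__(self):
        self._by_bucket = {}

    def store(self, speed_kmh: float, second_graphic) -> None:
        self._by_bucket[speed_bucket(speed_kmh)] = second_graphic

    def lookup(self, speed_kmh: float):
        """Return the graphic for this speed's bucket, or None if none stored."""
        return self._by_bucket.get(speed_bucket(speed_kmh))

store = GraphicStore()
store.store(15.0, "arrow_small")   # figure drawn for the 10-19 km/h range
store.store(25.0, "arrow_large")   # figure drawn for the 20-29 km/h range
print(store.lookup(18.0))  # arrow_small
```

At projection time, the current traveling speed selects the stored figure for its bucket, so the figure the user drew for that speed range is the one projected.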
 Also, for example, the display editing apparatus 1b may store in the storage unit 119 a threshold for the traveling speed of the vehicle, the second figure to be used when the traveling speed is below the threshold, and the manner in which the second figure is to be modified when the traveling speed reaches or exceeds the threshold. The modification of the second figure can be, for example, enlargement of the second figure, blinking, or the like.
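The threshold behavior can be sketched as a simple selection function: below the threshold the stored figure is used as-is, and at or above it the stored modification (enlargement, blinking, etc.) is applied. The threshold value and all names here are illustrative assumptions, not taken from the patent.

```python
THRESHOLD_KMH = 60.0  # assumed threshold value, for illustration only

def figure_for_speed(speed_kmh, base_figure, modification):
    """Return (figure, effect): effect is None below the threshold,
    otherwise the stored modification, e.g. "enlarge" or "blink"."""
    if speed_kmh < THRESHOLD_KMH:
        return base_figure, None
    return base_figure, modification

print(figure_for_speed(40.0, "warning_mark", "blink"))  # ('warning_mark', None)
print(figure_for_speed(80.0, "warning_mark", "blink"))  # ('warning_mark', 'blink')
```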
 In this way, in the display editing apparatus 1b, which second figure is stored in association with which situation of the projector 2 indicated by the vehicle information can be set as appropriate.
 Furthermore, for the plurality of pieces of second graphic information stored in the storage unit 119, the display editing apparatus 1b can use existing image processing techniques to interpolate between the second graphic information associated with one situation of the projector 2 and the second graphic information associated with another situation, and thereby calculate the second graphic information to be used in situations intermediate between them. For example, suppose that, for a single assumed viewer position, the position of the projector 2 and second graphic information are stored in the storage unit 119 in association with each other for each of three different points at which the vehicle may be located. Then, of the three stored pieces of second graphic information, image processing that continuously deforms the two pieces associated with two adjacent points makes it possible to calculate the second graphic information for the case where the vehicle is between those two points.
 In this case, the projector 2 can project second graphic information that changes continuously as the vehicle moves, producing a graphic display that appears natural to the viewer.
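One simple form of the "continuous deformation" mentioned above can be sketched as follows: if each stored second figure is represented by a list of vertices, the in-between figure for a vehicle position between two stored points is obtained by linearly blending corresponding vertices. The vertex data and function name are hypothetical; the patent itself only refers to existing image processing techniques in general.

```python
def interpolate_figure(fig_a, fig_b, t):
    """Blend two vertex lists; t=0 gives fig_a, t=1 gives fig_b."""
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(fig_a, fig_b)]

# Second figures stored for two adjacent vehicle positions (illustrative).
fig_at_p1 = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
fig_at_p2 = [(1.0, 0.0), (3.0, 0.0), (2.0, 4.0)]

# Vehicle halfway between the two stored points:
print(interpolate_figure(fig_at_p1, fig_at_p2, 0.5))
# [(0.5, 0.0), (2.5, 0.0), (1.5, 3.5)]
```

Sampling `t` continuously as the vehicle moves yields a figure that deforms smoothly from one stored shape to the next, rather than jumping between them.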
 As described above, according to the third embodiment, the apparatus includes the situation information acquisition unit 121, which acquires the situation of the drawing device, and stores the second graphic information in the storage unit 119 in association with information, acquired by the situation information acquisition unit 121, indicating the situation of the drawing device at the time the second graphic information was calculated. As a result, even when the situation surrounding the drawing device changes, the graphic information used to instruct the drawing device to draw can be edited, according to the situation, so that a viewer looking at the figure drawn on the object from a non-facing position can recognize the desired figure.
 In the third embodiment described above, the third embodiment was applied to the first embodiment: the display editing apparatus 1b further includes the situation information acquisition unit 121 and the graphic information acquisition unit 122 in addition to the components of the display editing apparatus 1 of the first embodiment. However, this is not a limitation, and the third embodiment may instead be applied to the second embodiment. That is, the display editing apparatus 1b may further include the situation information acquisition unit 121 and the graphic information acquisition unit 122 in addition to the components of the display editing apparatus 1a of the second embodiment described with reference to FIG. 7.
 In the first to third embodiments above, as an example, the projector 2 mounted on the vehicle projects a figure as a still image onto the road surface, and the display editing apparatuses 1 to 1b edit the figure as a still image.
 However, this is only an example. When the projector 2 mounted on the vehicle projects a figure as a moving image onto the road surface, the display editing apparatuses 1 to 1b can instead edit the figure as a moving image. In this case, the user may, for example, use an existing drawing application or the like to draw a moving image such as an animation as the first figure.
 The second graphic information calculated by the display editing apparatuses 1 to 1b can be exported in any data format, whether the second graphic information is a still image or a moving image. Therefore, if, for example, the second figure indicated by second graphic information once calculated from first graphic information in one of the display editing apparatuses 1 to 1b is also to be projected from a projector mounted on another vehicle, the second graphic information can be exported from the display editing apparatus 1 to 1b and the data copied to a storage device of the control device that controls the projector mounted on the other vehicle; the same second figure can then be reproduced by projecting it from the projector mounted on the other vehicle.
 In the first to third embodiments above, the display editing apparatuses 1 to 1b are mounted on the tablet PC 100, but this is not a limitation. The apparatus on which the display editing apparatuses 1 to 1b are mounted may be any apparatus that lets the user see, on a display screen or directly, the real space on which the second figure is to be drawn, and that can accept and display a still image or a moving image so that it appears superimposed on that real space. For example, the apparatus on which the display editing apparatuses 1 to 1b are mounted may be an ordinary PC with a video camera installed on the back of its display, configured so that the user captures the real space with the video camera and draws the first figure on the image of the real space displayed on the PC's screen. As another example, the apparatus may be a see-through head-mounted display, configured so that the user wears the head-mounted display, the user's fingertip is tracked by a three-dimensional tracker, and the user can draw the first figure superimposed on the real space while looking at the real space directly.
 In the first to third embodiments above, the object on which the figure is drawn is a road surface, and the drawing device is the projector 2, which projects light onto the road surface; however, this is not a limitation.
 For example, the drawing device may be a robot that draws on a whiteboard or the like with an arm holding a marker or the like, or a self-propelled robot that draws on the ground or the like using tape or the like.
 Thus, the object is not limited to a road surface and may be any surface, such as a whiteboard. The object may also be a surface that includes unevenness. Likewise, the drawing device is not limited to the projector 2 and may be any device capable of drawing a figure, such as a robot arm. The drawing device may also be a device in which a mask whose shape changes mechanically is attached to a light source, the figure being made visible by changing the shape of the mask.
 In the first to third embodiments above, the drawing device is the projector 2, which projects light onto the road surface, and the display editing apparatuses 1 to 1b edit the second graphic information for causing the projector 2 to project a desired figure. In this case, because the second figure projected from the projector 2 may be deformed on the road surface depending on the positional relationship between the projector 2 and the road surface and other factors, the coordinate conversion unit 115 of the display editing apparatuses 1 to 1b calculates the second graphic information based on information on the positional relationship between the road surface and the camera 101 and information on the positional relationship between the projector 2 and the camera 101.
 However, when the drawing device is, for example, a robot that draws on a whiteboard or the like with an arm holding a marker or the like, or a self-propelled robot that draws on the ground or the like using tape or the like, the drawn figure is the second figure itself, so the coordinate conversion unit 115 can calculate the second graphic information as long as the positional relationship between the camera 101 and the object is known.
 In this case, therefore, it suffices for the positional relationship acquisition unit 113 to acquire the positional relationship between the camera 101 and the object, and the coordinate conversion unit 115 may calculate the second graphic information from the information on the positional relationship between the camera 101 and the object and the first graphic information.
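For the direct-drawing case just described, one common way to realize such a coordinate conversion is a planar homography: points of the first figure, given in camera image coordinates, are mapped to coordinates on the target surface with a 3x3 matrix derived from the camera-object positional relationship. The sketch below assumes this formulation; the matrix values and function name are illustrative, not taken from the patent.

```python
def apply_homography(h, points):
    """Map 2D points through a 3x3 homography h (row-major nested lists)."""
    out = []
    for x, y in points:
        xs = h[0][0] * x + h[0][1] * y + h[0][2]
        ys = h[1][0] * x + h[1][1] * y + h[1][2]
        w = h[2][0] * x + h[2][1] * y + h[2][2]
        out.append((xs / w, ys / w))  # divide out the homogeneous coordinate
    return out

# Illustrative homography: uniform scale by 2 plus a translation of (10, 5).
H = [[2.0, 0.0, 10.0],
     [0.0, 2.0, 5.0],
     [0.0, 0.0, 1.0]]

first_figure = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]  # camera image coords
second_figure = apply_homography(H, first_figure)     # object surface coords
print(second_figure)  # [(10.0, 5.0), (12.0, 5.0), (12.0, 7.0)]
```

In practice the matrix would be estimated from the acquired positional relationship (for example, from known correspondences between camera pixels and surface points) rather than written out by hand.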
 Within the scope of the present invention, the embodiments may be freely combined, any component of any embodiment may be modified, and any component may be omitted from any embodiment.
 Because the display editing apparatus according to the present invention can edit the graphic information used to instruct a drawing device to draw so that a viewer looking at a figure drawn on an object from a non-facing position can recognize the desired figure, it can be applied to a display editing apparatus or the like that edits graphic information for instructing a drawing device that draws figures on an arbitrary object.
 Reference Signs List: 1 display editing apparatus, 2 projector, 51 vehicle, 100 tablet PC, 101 camera, 102 display unit, 103 audio output device, 111 captured image acquisition unit, 112 graphic reception unit, 113 positional relationship acquisition unit, 114 graphic position determination unit, 115 coordinate conversion unit, 116 instruction unit, 117 display control unit, 118 audio output control unit, 119 storage unit, 120 graphic complementation unit, 121 situation information acquisition unit, 122 graphic information acquisition unit.

Claims (9)

  1.  A display editing apparatus comprising:
      a captured image acquisition unit that acquires an image of an object captured by a camera;
      a graphic reception unit that receives first graphic information on a first figure drawn on the image;
      a positional relationship acquisition unit that acquires a positional relationship between the object and the camera at the time the image was captured;
      a coordinate conversion unit that calculates second graphic information from the positional relationship acquired by the positional relationship acquisition unit and the first graphic information; and
      an instruction unit that outputs the second graphic information calculated by the coordinate conversion unit to a drawing device that performs drawing on the object.
  2.  The display editing apparatus according to claim 1, wherein
      the drawing device is a projector that projects light onto the object,
      the positional relationship acquisition unit acquires, in addition to the positional relationship between the object and the camera at the time the image was captured, a positional relationship between the drawing device and the object at the time the image was captured, and
      the coordinate conversion unit calculates the second graphic information from the positional relationship between the object and the camera and the positional relationship between the drawing device and the object, both acquired by the positional relationship acquisition unit, and the first graphic information.
  3.  The display editing apparatus according to claim 1, wherein the drawing device is a device that draws directly on the object.
  4.  The display editing apparatus according to claim 1, further comprising a graphic complementation unit that, based on the first graphic information received by the graphic reception unit, complements the first figure so that the shape of the first figure on the object is bilaterally symmetric, and creates complemented first graphic information, wherein
      the coordinate conversion unit calculates the second graphic information from the positional relationship acquired by the positional relationship acquisition unit and the complemented first graphic information created by the graphic complementation unit.
  5.  The display editing apparatus according to claim 1, further comprising:
      a situation information acquisition unit that acquires a situation of the drawing device; and
      a graphic information acquisition unit that stores the second graphic information in a storage unit in association with information, acquired by the situation information acquisition unit, indicating the situation of the drawing device at the time the second graphic information was calculated.
  6.  A portable device comprising: the display editing apparatus according to claim 1; the camera; and a touch panel for displaying an image of the object and for drawing the first figure.
  7.  The display editing apparatus according to claim 6, further comprising a graphic complementation unit that, based on the first graphic information received by the graphic reception unit, complements the first figure so that the shape of the first figure on the object is bilaterally symmetric, and creates complemented first graphic information, wherein
      the coordinate conversion unit calculates the second graphic information from the positional relationship acquired by the positional relationship acquisition unit and the complemented first graphic information created by the graphic complementation unit, and
      the graphic complementation unit displays, on the touch panel, a guide for making the first figure bilaterally symmetric.
  8.  A display editing method comprising the steps of:
      a captured image acquisition unit acquiring an image of an object captured by a camera;
      a graphic reception unit receiving first graphic information on a first figure drawn on the image;
      a positional relationship acquisition unit acquiring a positional relationship between the object and the camera at the time the image was captured;
      a coordinate conversion unit calculating second graphic information from the positional relationship acquired by the positional relationship acquisition unit and the first graphic information; and
      an instruction unit outputting the second graphic information calculated by the coordinate conversion unit to a drawing device that performs drawing on the object.
  9.  A display editing program for causing a computer to execute a process of:
      acquiring an image of an object captured by a camera;
      receiving first graphic information on a first figure drawn on the image;
      acquiring a positional relationship between the object and the camera at the time the image was captured;
      calculating second graphic information from the acquired positional relationship and the first graphic information; and
      outputting the calculated second graphic information to a drawing device that performs drawing on the object.
PCT/JP2017/022519 2017-06-19 2017-06-19 Display editing apparatus, display editing method, and display editing program WO2018235128A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
DE112017007535.1T DE112017007535B4 (en) 2017-06-19 2017-06-19 DISPLAY PROCESSING DEVICE, PORTABLE DEVICE, DISPLAY PROCESSING METHODS, AND RECORDING MEDIUM
JP2019524719A JP6671549B2 (en) 2017-06-19 2017-06-19 Display editing device, portable device, display editing method, and display editing program
PCT/JP2017/022519 WO2018235128A1 (en) 2017-06-19 2017-06-19 Display editing apparatus, display editing method, and display editing program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/022519 WO2018235128A1 (en) 2017-06-19 2017-06-19 Display editing apparatus, display editing method, and display editing program

Publications (1)

Publication Number Publication Date
WO2018235128A1 true WO2018235128A1 (en) 2018-12-27

Family

ID=64735553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/022519 WO2018235128A1 (en) 2017-06-19 2017-06-19 Display editing apparatus, display editing method, and display editing program

Country Status (3)

Country Link
JP (1) JP6671549B2 (en)
DE (1) DE112017007535B4 (en)
WO (1) WO2018235128A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10217192A (en) * 1997-01-31 1998-08-18 Kyokuto Sanki Co Ltd System and method for cutting curtain
JP2003348500A (en) * 2002-03-19 2003-12-05 Fuji Photo Film Co Ltd Projection image adjustment method, image projection method, and projector
JP2010283674A (en) * 2009-06-05 2010-12-16 Panasonic Electric Works Co Ltd Projection system and projection method
JP2014176074A (en) * 2013-03-13 2014-09-22 Nippon Telegr & Teleph Corp <Ntt> Space projection device, space projection method and space projection program
WO2016014731A1 (en) * 2014-07-22 2016-01-28 Aplus Flash Technology, Inc. Yukai vsl-based vt-compensation for nand memory

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013083755A (en) 2011-10-07 2013-05-09 Canon Inc Display device, method of controlling display device, and program
JP2013235374A (en) * 2012-05-08 2013-11-21 Sony Corp Image processing apparatus, and projection control method and program
JP2016144114A (en) * 2015-02-04 2016-08-08 セイコーエプソン株式会社 Projector and method for controlling projector
JP6636252B2 (en) * 2015-03-19 2020-01-29 株式会社メガチップス Projection system, projector device, imaging device, and program
JP6631181B2 (en) * 2015-11-13 2020-01-15 セイコーエプソン株式会社 Image projection system, projector, and method of controlling image projection system


Also Published As

Publication number Publication date
DE112017007535B4 (en) 2021-07-15
JPWO2018235128A1 (en) 2019-11-14
JP6671549B2 (en) 2020-03-25
DE112017007535T5 (en) 2020-04-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17914938; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2019524719; Country of ref document: JP; Kind code of ref document: A)
122 Ep: pct application non-entry in european phase (Ref document number: 17914938; Country of ref document: EP; Kind code of ref document: A1)