JP5482740B2 - Presentation system, presentation device, and program - Google Patents

Presentation system, presentation device, and program

Info

Publication number
JP5482740B2
Authority
JP
Japan
Prior art keywords
presentation
material
gesture
displayed
presentation system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2011153031A
Other languages
Japanese (ja)
Other versions
JP2013020434A (en)
Inventor
武 森川
開拓 小澤
猛 南
大輔 崎山
和也 姉崎
Original Assignee
コニカミノルタ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by コニカミノルタ株式会社
Priority to JP2011153031A
Publication of JP2013020434A
Application granted
Publication of JP5482740B2
Application status is Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304: Detection arrangements using opto-electronic means
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations

Description

  The present invention relates to a presentation system and related technology.

  In presentation apparatuses and the like, there is a known technique for detecting a gesture of a presenter and advancing the presentation accordingly (see Patent Document 1).

JP 2010-205235 A

  However, the technique described in Patent Document 1 does not specify the period during which the presenter's gesture is detected, so it is unclear at what point gesture detection starts. As a result, a gesture made by the presenter before the presentation begins may be erroneously detected.

  Accordingly, an object of the present invention is to provide a presentation system, and related technology, capable of avoiding erroneous detection of gestures made before the start of a presentation.

In order to solve the above problem, the invention of claim 1 is a presentation system comprising: receiving means for accepting an instruction to start a presentation; detection means for starting detection of the presenter's gesture in response to the start instruction and detecting the orientation of the presenter's face at the time the gesture is detected; and control means for controlling the distribution operation of the presentation material based on the detected content of the gesture, wherein the control means determines the distribution destination of the presentation material based on the orientation of the face.

The invention of claim 2 is the presentation system according to claim 1, wherein the control means distributes the presentation material to a listener terminal, which is a terminal of a listener of the presentation, on condition that the orientation of the face is a first orientation, from the presenter's location toward the listeners.

According to a third aspect of the present invention, in the presentation system according to the first or second aspect, the control means distributes the presentation material to a display output unit on condition that the orientation of the face is a second orientation, from the presenter's location toward the display surface of the output image of the display output unit.

According to a fourth aspect of the present invention, in the presentation system according to any one of the first to third aspects, the control means determines the material to be distributed based on the detected content.

According to a fifth aspect of the present invention, in the presentation system according to the fourth aspect, the control means determines, based on the detected content, which of the text material and the attached material is the distribution target material.

A sixth aspect of the present invention is the presentation system according to any one of the first to third aspects, wherein the control means determines a distribution target page of the distribution target material based on the detected content.

A seventh aspect of the present invention is the presentation system according to the sixth aspect, wherein the control means determines, based on the detected content, whether all pages of the distribution target material or some pages of the distribution target material are set as the distribution target pages.

According to an eighth aspect of the present invention, in the presentation system according to the sixth aspect, the control means determines, based on the detected content, whether the page next to the displayed page, which is displayed on the display surface of the output image by the display output means, or the page preceding the displayed page is set as the distribution target page.

According to a ninth aspect of the present invention, in the presentation system according to the second aspect, the control means distributes the data of all pages of a specific material in the presentation material to the listener terminal on condition that a first gesture is detected by the detection means.

According to a tenth aspect of the present invention, in the presentation system according to the ninth aspect, the presentation material includes a text material and an attached material, and when the first gesture is detected, the control means distributes the data of all pages of the text material to the listener terminal if the text material is displayed on the display surface of the output image by the display output unit, and distributes the data of all pages of the attached material to the listener terminal if the attached material is displayed on the display surface.

The invention of claim 11 is the presentation system according to claim 2, wherein the control means distributes, to the listener terminal, the data of the page displayed on the display surface of the output image by the display output unit, on condition that a second gesture is detected by the detection means.

According to a twelfth aspect of the present invention, in the presentation system according to the eleventh aspect, the presentation material includes a text material and an attached material, and when the second gesture is detected, the control means distributes, to the listener terminal, the data of the page displayed on the display surface among the plurality of pages of the text material if the text material is displayed on the display surface, and distributes, to the listener terminal, the data of the page displayed on the display surface among the plurality of pages of the attached material if the attached material is displayed on the display surface.

A thirteenth aspect of the present invention is the presentation system according to the second aspect, wherein the presentation material includes a text material and an attached material, and on condition that a third gesture is detected by the detection means, the control means distributes the whole of the attached material to the listener terminal if the text material is displayed on the display surface of the output image by the display output unit, and distributes the whole of the text material to the listener terminal if the attached material is displayed on the display surface.

According to a fourteenth aspect of the present invention, in the presentation system according to the third aspect, the control means distributes, to the display output unit, the data of the page next to the page displayed on the display surface among the plurality of pages of the presentation material, on condition that a fourth gesture is detected by the detection means.

According to a fifteenth aspect of the present invention, in the presentation system according to the third aspect, the control means distributes, to the display output unit, the data of the page preceding the page displayed on the display surface among the plurality of pages of the presentation material, on condition that a fifth gesture is detected by the detection means.

According to a sixteenth aspect of the present invention, in the presentation system according to the third aspect, the presentation material includes a text material and an attached material, and on condition that a sixth gesture is detected by the detection means, the control means distributes the attached material to the display output unit if the text material is displayed on the display surface, and distributes the text material to the display output unit if the attached material is displayed on the display surface.

The invention of claim 17 is a presentation system comprising: a photographing device for photographing a presenter; and a presentation device capable of communicating with the photographing device, wherein the presentation device comprises receiving means for accepting an instruction to start a presentation, detection means for detecting the presenter's gesture based on an image photographed by the photographing device, and control means for controlling the distribution operation of the presentation material based on the detected content of the gesture; the detection means starts detecting the presenter's gesture in response to the start instruction and detects the orientation of the presenter's face at the time the gesture is detected, and the control means determines the distribution destination of the presentation material based on the orientation of the face.

The invention of claim 18 is a presentation device comprising: receiving means for accepting an instruction to start a presentation; detection means for starting detection of the presenter's gesture in response to the start instruction and detecting the orientation of the presenter's face at the time the gesture is detected; and control means for controlling the distribution operation of the presentation material based on the detected content of the gesture, wherein the control means determines the distribution destination of the presentation material based on the orientation of the face.

The invention of claim 19 is a program for causing a computer to execute: a) a step of accepting an instruction to start a presentation; b) a step of starting detection of the presenter's gesture in response to the start instruction and detecting the orientation of the presenter's face at the time the gesture is detected; and c) a step of distributing a presentation material based on the detected content of the gesture, wherein step c) determines the distribution destination of the presentation material based on the orientation of the face.

According to the inventions of claims 1 to 19, since detection of the presenter's gesture is started in response to the instruction to start the presentation, erroneous detection of a gesture made before the start of the presentation can be avoided. Further, since the distribution operation of the presentation material is controlled based on the detected content of the gesture, the presenter can instruct distribution of the presentation material by a simple operation using a gesture.

In addition, since the distribution destination of the presentation material is determined based on the orientation of the presenter's face, the presenter can easily designate the distribution destination.

FIG. 1 is a conceptual diagram showing an overview of the presentation system according to the embodiment.
FIG. 2 is a block diagram showing the schematic configuration of the presentation device.
FIGS. 3 to 6 are diagrams showing screens displayed on the touch screen.
FIG. 7 is a flowchart showing the operation of the presentation device.
FIG. 8 is a flowchart showing the operation of distributing presentation material to the listener terminals.
FIG. 9 is a flowchart showing the operation of distributing presentation material to the projector.
FIGS. 10 to 12 are diagrams showing images captured by the camera.
FIGS. 13 to 15 are diagrams showing comparison images used for the matching process.
FIGS. 16 to 19 are diagrams showing images captured by the camera.
FIGS. 20 to 24 are conceptual diagrams showing page images displayed on the screen.
FIGS. 25 and 26 are flowcharts showing the operation of distributing presentation material to the listener terminals.

  Hereinafter, embodiments of the present invention will be described with reference to the drawings.

<1. System configuration>
FIG. 1 is a system configuration diagram showing an overview of the presentation system 100. The presentation system 100 is also referred to as an electronic conference system. The system 100 includes a presentation device 10, a camera 20, a display output device 30, and listener terminals 70.

  The presentation device 10, the camera 20, the display output device 30, and each audience terminal 70 are connected to each other via a network NW, and can perform network communication. Here, the network NW is configured by a LAN, a WAN, the Internet, or the like. The connection form of each device to the network NW may be a wired connection or a wireless connection.

  The presentation device 10 is a device that manages material for the presentation (also referred to as presentation material) BP, and is also referred to as a management device. The presentation device 10 stores the presentation material BP and controls the operation of distributing the presentation material BP to each distribution destination.

  Specifically, as will be described in detail later, the presentation device 10 detects a gesture GT of the presenter PT based on images captured by the camera 20 and the like. When a gesture GT is detected, the presentation device 10 distributes display data of the presentation material BP (specifically, page images or the like) to the display output device 30, the listener terminals 70, and so on, based on the detected content of the gesture GT.

  The presentation material BP includes a text material MP and an attached material SP. The text material MP is a material mainly used by the presenter PT in the progress of the presentation. The attached material SP is material used by the presenter PT to supplement the text material MP, and is also referred to as supplemental material.

  FIG. 1 also shows the inside of the conference room MR, which is the venue for the presentation. The left side of FIG. 1 corresponds to the front of the conference room MR, and the right side corresponds to the rear. The listeners UA to UD sit in the rear of the conference room MR (the right side in FIG. 1) facing the screen SC and listen to the presentation by the presenter PT. The screen SC is installed at the front of the conference room MR (the left side in FIG. 1), and the image output from the display output device 30 is projected onto it. The presenter PT stands about 1 m to 2 m away from the screen SC on the listener side, to the right of the screen SC as viewed from the listeners UA to UD, and makes the presentation while facing either the screen SC or the listeners UA to UD. In short, the presenter PT presents near the screen SC.

  The camera 20 is arranged at a position where the presenter PT is photographed from the side of the presenter PT, and photographs a moving image of the presenter PT.

  The display output device 30 is a device that displays page images of the presentation material; for example, a projector is used as the display output device 30. The listeners of the presentation (listeners UA, UB, UC, UD, ...) can view the page images of the presentation material BP on the screen SC, which is the display surface of the output image of the display output device 30.

  The listener terminals 70A, 70B, 70C, and 70D are terminal devices used by the presentation listeners UA, UB, UC, UD, ..., respectively. For example, a personal computer is used as each listener terminal 70. The listeners UA to UD can display and browse the presentation material using their respective listener terminals 70A to 70D.

  In this embodiment, the presentation apparatus 10 is configured as an apparatus (image forming apparatus) having an image forming function, more specifically, an MFP (Multi-Functional Peripheral).

  FIG. 2 is a block diagram illustrating a schematic configuration of the presentation apparatus (MFP) 10.

  As shown in the functional block diagram of FIG. 2, the presentation apparatus (MFP) 10 includes an image reading unit 2, a print output unit 3, a communication unit 4, a storage unit 5, an input / output unit 6, a controller 9, and the like, and realizes various functions by operating these components in combination.

  The image reading unit (scanner unit) 2 is a processing unit that generates image data (also referred to as a scanned image) of a document by optically reading the document placed at a predetermined position on the presentation device (MFP) 10. For example, the image reading unit 2 reads a document placed there by the presenter PT and generates image data of the document as the presentation material BP.

  The print output unit 3 is an output unit that prints out an image on various media such as paper based on data related to a print target.

  The communication unit 4 is a processing unit capable of performing network communication via the communication network NW. In this network communication, various protocols such as TCP / IP (Transmission Control Protocol / Internet Protocol) and FTP (File Transfer Protocol) are used. By using the network communication, the presentation apparatus (MFP) 10 can exchange various data with a desired destination.

  The storage unit 5 includes a storage device such as a hard disk drive, and stores the presentation material BP generated by the image reading unit 2 and the like.

  The input / output unit 6 includes an operation input unit 6a that receives an input to the presentation apparatus (MFP) 10 and a display unit 6b that displays and outputs various types of information. Specifically, the presentation apparatus 10 is provided with an operation panel 63 (see FIG. 1) and the like. The operation panel (touch screen) 63 is configured by embedding a piezoelectric sensor or the like in a liquid crystal display panel, and functions as a part of the display unit 6b and also functions as a part of the operation input unit 6a.

  The controller 9 is a control device that is built into the presentation apparatus (MFP) 10 and controls the presentation apparatus 10 in an integrated manner. The controller 9 is configured as a computer system including a CPU and various semiconductor memories (such as a RAM and a ROM). The controller 9 implements various processing units by having the CPU execute a predetermined software program (hereinafter also simply referred to as a program) PG stored in a ROM (for example, an EEPROM).

  Specifically, as shown in FIG. 2, the controller 9 includes a gesture detection unit 91 and a distribution operation control unit 93.

  The gesture detection unit 91 is a processing unit that detects a gesture of the presenter PT (hereinafter also referred to as a gesture GT) based on an image captured by the camera 20 or the like. As will be described later, the gesture detection unit 91 starts detecting the gesture GT in response to a presentation start instruction.

  The distribution operation control unit 93 is a processing unit that controls the distribution operation of the presentation material BP based on the detected content of the gesture GT.
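
  The patent does not give an implementation for these processing units, but as a rough illustration, the following Python sketch models the gesture detection unit 91 and the distribution operation control unit 93 as two cooperating objects. All class, method, and attribute names here are invented for illustration only; they are a sketch of the division of responsibilities, not the actual device firmware.

    from dataclasses import dataclass, field

    @dataclass
    class GestureDetectionUnit:
        """Rough stand-in for the gesture detection unit 91."""
        camera: object                      # source of captured images SG
        armed: bool = False                 # detection runs only after the start instruction

        def start_monitoring(self):
            # Called when the presentation start instruction is accepted (step S12).
            self.armed = True

        def poll(self):
            """Return (gesture, face_direction) if a gesture was detected, else None."""
            if not self.armed:
                return None
            frame = self.camera.capture()
            # ... matching against pre-captured comparison images would go here ...
            return None

    @dataclass
    class DistributionOperationControlUnit:
        """Rough stand-in for the distribution operation control unit 93."""
        projector: object                              # display output device 30
        terminals: list = field(default_factory=list)  # listener terminals 70A to 70D

        def handle(self, gesture, face_direction):
            if face_direction == "D1":      # presenter faces the listeners
                self.distribute_to_terminals(gesture)
            elif face_direction == "D2":    # presenter faces the screen SC
                self.distribute_to_projector(gesture)

        def distribute_to_terminals(self, gesture): ...
        def distribute_to_projector(self, gesture): ...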

<2. Operation>
Next, various operations in the presentation system 100 will be described.

  In the presentation system 100, when a start instruction from the presenter PT is received, the presentation is started. Simultaneously with the start of the presentation, the detection of the gesture GT of the presenter PT based on the image taken by the camera 20 or the like is also started.

  Further, when the gesture GT of the presenter PT is detected after the presentation is started, the presentation material BP is distributed to each distribution destination based on the detected content of the gesture GT. When the gesture GT is detected, the face direction of the presenter PT is also detected, and the distribution destination of the presentation material BP is determined based on the face direction.

  Hereinafter, such operations will be described with reference to the drawings. Specifically, (1) the registration operation of the presentation material BP, (2) the presentation start operation, and (3) the distribution operation of the presentation material BP will be described in this order.

  (1) The presentation material BP is registered in advance in the presentation device 10 by the user on the organizer side (here, the presenter PT) before the presentation is started. Hereinafter, the registration operation of the presentation material BP will be described in detail with reference to FIGS. 3 to 6.

  First, the screen GA1 (see FIG. 3) is displayed on the touch screen 63 in response to a predetermined operation by the presenter PT. As shown in FIG. 3, three buttons BT1, BT2, and BT3, one for each category ("shared", "meeting", "individual"), are displayed on the screen GA1. In this embodiment, a case where the presentation material BP and the like are registered in the box BX2 corresponding to the category "meeting" is illustrated.

  When the button BT2 in the screen GA1 is pressed, an operation screen GA2 (see FIG. 4) relating to the box BX2 is displayed on the touch screen 63. As shown in FIG. 4, six buttons BT21 to BT26 respectively corresponding to the six boxes BX21 to BX26 are displayed in the approximate center of the screen GA2. Further, buttons BN1 to BN3 for executing various operations are displayed on the right side of the screen GA2. The button BN1 is a button for executing printing. The button BN2 is a button for giving an instruction to start a meeting (presentation). The button BN3 is a button for registering the presentation material BP in each box BX21 to BX26.

  Here, a case where the presentation material BP is registered in the box BX21 for "Conference 1" is illustrated. First, when the button BT21 on the screen GA2 is selected and the button BN3 is pressed, the presentation device 10 displays the screen GA3 (see FIG. 5) on the touch screen 63. As shown in FIG. 5, a message "Register text material. Set text material on the platen and press the start button." is displayed on the screen GA3.

  Here, when a document related to the text material MP is set on the document table and a start button (not shown) is pressed, the presentation device 10 reads the document, generates the text material MP, and stores it in the box BX21. When the text material MP has been stored, the presentation device 10 displays the screen GA4 (see FIG. 6) on the touch screen 63. As shown in FIG. 6, a message "Register the attached material. Set the attached material on the platen and press the start button" is displayed at the center of the screen GA4.

  When the document related to the attached material SP is set on the document table and a start button (not shown) is pressed, the presentation device 10 reads the document, generates the attached material SP, and stores it in the box BX21. When the attached material SP has been stored, the presentation device 10 displays a screen (not shown) on the touch screen 63 indicating that the text material MP and the attached material SP have been registered in the box BX21.

  Here, the case where the documents are read by the image reading unit 2 and the text material MP and the attached material SP are generated and registered in the box BX21 is illustrated, but the present invention is not limited to this. For example, various files stored in an external device may be acquired as the text material MP and the attached material SP via the network NW and stored in the box BX21.
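
  As a simple illustration of this registration step, the sketch below models the boxes BX21 to BX26 as a small in-memory store holding a text material and an attached material per box. This data model is an assumption made for illustration; the patent only says that both materials end up stored in the selected box.

    # Hypothetical model of the conference boxes (BX21 to BX26) used for registration.
    boxes = {f"BX{21 + i}": {"text_material": None, "attached_material": None}
             for i in range(6)}

    def register_materials(box_id, text_pages, attached_pages):
        """Store the scanned text material MP and attached material SP in a box."""
        boxes[box_id]["text_material"] = list(text_pages)        # e.g. scanned page images
        boxes[box_id]["attached_material"] = list(attached_pages)

    # Example: register a 5-page text material and a 2-page attached material in box BX21.
    register_materials("BX21", ["MP-p1", "MP-p2", "MP-p3", "MP-p4", "MP-p5"],
                       ["SP-p1", "SP-p2"])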

  (2) Next, the presentation start operation will be described with reference to FIG.

  First, the presenter PT causes the touch screen 63 to display the screen GA2 (see FIG. 4) again through a predetermined operation.

  Thereafter, the presenter PT selects the button BT21 corresponding to the box BX21 in which the text material MP and the attached material SP have been registered (stored) in advance, and presses the button BN2 instructing the start of the presentation. The presentation device 10 accepts the pressing of the button BN2 by the presenter PT as a presentation start instruction.

  (3) Next, the distribution operation of the presentation material BP will be described with reference to the flowcharts of FIGS.

  When the presentation start instruction is accepted, the presentation device 10 distributes the data of the text material MP (specifically, the page image of the first page of the text material MP) to the display output device 30 in step S11 of FIG. 7. The display output device 30 projects the page image of the first page of the distributed text material MP onto the screen SC.

  In step S12, the presentation device 10 starts photographing the presenter PT with the camera 20 and starts detecting the gesture GT of the presenter PT with the gesture detection unit 91. In this way, in response to the presentation start instruction, the gesture detection unit 91 starts detecting (monitoring) the gesture GT of the presenter PT.

  Gestures GT1 to GT6 each consist of a common operation CA, in which the presenter PT lifts an arm to a position higher than just beside the face, followed by an action specific to each gesture. In the common operation CA, it is assumed that the presenter PT lifts the arm with the elbow bent, and that the elbow remains bent even when the arm reaches a position higher than just beside the face; that is, the arm of the presenter PT is not fully extended in the common operation CA. When the operation CA common to the gestures GT1 to GT6 is detected, it is determined that a gesture GT (one of the gestures GT1 to GT6) has started. The type of gesture (which of gestures GT1 to GT6 was performed) is then identified by detecting the specific action that follows the common operation CA.

  Specifically, as shown in FIG. 10, the presentation device 10 starts monitoring the peripheral area SA including the presenter PT based on the captured image SG from the camera 20. While monitoring the peripheral area SA, the gesture detection unit 91 performs a matching process between the captured image SG from the camera 20 and a comparison image HG (not shown). The comparison image HG is an image obtained in advance (before the presentation is started) showing the presenter PT with an arm raised. During monitoring of the peripheral area SA, when, for example, the captured image SG1 at a certain point in time (see FIG. 11) matches the comparison image HG, the gesture detection unit 91 determines that the common operation CA has been performed. At the same time, the presentation device 10 extracts the outline of the whole presenter PT from the captured image SG1, and further extracts the arm portion ST of the presenter PT (see FIG. 11) from that outline. Specifically, the presentation device 10 extracts the body part from the outline of the whole presenter PT, and treats the part projecting (extending) upward (or obliquely upward) from the body part as the arm portion ST of the presenter PT (see FIG. 11).

  Furthermore, the action that follows the common operation CA is also detected. Based on the common operation and the subsequent action, it is determined that a gesture GT of the presenter PT has been detected, and its type (one of gestures GT1 to GT6) is identified.
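
  The patent describes this matching process only as a comparison between the captured image SG and the pre-captured comparison image HG; the sketch below shows one plausible way to realize such a comparison with OpenCV template matching. The use of cv2.matchTemplate, the threshold value, and the function names are assumptions for illustration only, not the method specified by the patent.

    import cv2

    MATCH_THRESHOLD = 0.8   # assumed similarity threshold; the patent does not specify one

    def matches(captured_gray, comparison_gray, threshold=MATCH_THRESHOLD):
        """Return True if the comparison image is found in the captured frame."""
        result = cv2.matchTemplate(captured_gray, comparison_gray, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        return max_val >= threshold

    def common_operation_detected(frame_sg, comparison_hg):
        """Detect the common operation CA: the presenter's raised-arm posture
        in the current frame matches the pre-captured comparison image HG."""
        gray_sg = cv2.cvtColor(frame_sg, cv2.COLOR_BGR2GRAY)
        gray_hg = cv2.cvtColor(comparison_hg, cv2.COLOR_BGR2GRAY)
        return matches(gray_sg, gray_hg)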

  As will be described later, the gesture GT1 is a gesture in which the presenter PT extends all five fingers upward (raises all fingers) (see FIG. 13), and the gesture GT2 is a gesture in which the presenter PT extends (raises) one finger upward (see FIG. 14). The gesture GT3 is a gesture in which the presenter PT extends (raises) three fingers upward (see FIG. 15). Furthermore, the gesture GT4 is a gesture of moving the arm raised beside the face to the right (see FIG. 17), and the gesture GT5 is a gesture of moving the arm raised beside the face to the left (see FIG. 18). The gesture GT6 is a gesture of moving the arm raised beside the face further upward (see FIG. 19).

  However, here, these gestures GT1 to GT6 are determined according to the detection result of the face direction of the presenter PT at the time of gesture detection, as will be described later. Specifically, the gestures GT1 to GT3 are detected on condition that the face direction of the presenter PT is “direction D1” (described later) (steps S14 and S15). The gestures GT4 to GT6 are detected on condition that the face direction of the presenter PT is “direction D2” (described later) (steps S16 and S17).

  In step S13 following step S12, it is determined whether or not the gesture GT of the presenter PT has been detected. If it is determined that the gesture GT of the presenter PT has been detected, the process proceeds to step S14.

  In step S14, it is determined whether or not the orientation of the face of the presenter PT when the gesture GT is detected is “direction D1”. Here, the direction D1 is a direction from the location of the presenter PT toward the listeners UA to UD, that is, a “direction toward the listener”. In other words, in step S14, it is determined whether or not the gesture GT of the presenter PT is performed toward the listeners UA to UD.

  Specifically, the presentation device 10 performs a matching process between the captured image SG1 (see FIG. 11) and the comparison image IG1 (not shown). Note that the comparison image IG1 is a still image, captured in advance by the camera 20 from the side of the presenter PT, of the presenter PT with the face turned toward the listeners' seating positions.

  As a result of the matching process between the photographed image SG1 and the image IG1, if it is determined that the face direction of the presenter PT at the time of detecting the gesture GT is the direction D1, the process proceeds to step S15. In step S15, a process of distributing the presentation material BP to the listener terminal 70 (see FIG. 8) is executed. The process of distributing the presentation material BP to the listener terminal 70 will be described in detail later.

  On the other hand, as a result of the matching process between the captured image SG1 and the image IG1, if it is determined that the face direction of the presenter PT is not the direction D1 when the gesture GT is detected, the process proceeds to step S16.

  In step S16, it is determined whether or not the orientation of the face of the presenter PT when the gesture GT is detected is “direction D2.” Here, the direction D2 is a direction from the location of the presenter PT toward the screen SC, that is, a “direction toward the screen”. In other words, in step S16, it is determined whether or not the gesture GT of the presenter PT is performed toward the screen SC.

  Specifically, the presentation device 10 performs a matching process between the captured image SG1 (see FIG. 11) and the comparison image IG2 (not shown). Note that the comparison image IG2 is a still image, captured in advance by the camera 20 from the side of the presenter PT, of the presenter PT facing the screen SC.

  As a result of the matching process between the photographed image SG1 and the image IG2, if it is determined that the face direction of the presenter PT at the time of detecting the gesture GT is the direction D2, the process proceeds to step S17. In step S17, the distribution process of the presentation material BP to the display output device 30 (see FIG. 9) is executed. The process of distributing the presentation material BP to the display output device 30 will also be described in detail later.

  On the other hand, as a result of the matching process between the captured image SG1 and the image IG2, if it is determined that the face orientation at the time of detecting the gesture GT is not the orientation D2, the process proceeds to step S18.

  In step S18, it is determined whether or not to end the presentation. If it is determined that the presentation is to be terminated, the process is terminated; otherwise, the process returns to step S13.
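
  Steps S13 to S18 amount to a simple dispatch loop: once a gesture is detected, the detected face orientation decides whether the listener terminals or the display output device receive material. The following sketch summarizes that control flow; function names such as poll, distribute_to_terminals, and presentation_finished are placeholders for the matching processes and distribution procedures described in this section, not actual APIs.

    def presentation_loop(detector, controller):
        """Control flow of steps S13 to S18 in FIG. 7 (simplified sketch)."""
        while not presentation_finished():                     # step S18
            detection = detector.poll()                        # step S13
            if detection is None:
                continue
            gesture, orientation = detection
            if orientation == "D1":                            # facing the listeners (step S14)
                controller.distribute_to_terminals(gesture)    # step S15 (FIG. 8)
            elif orientation == "D2":                          # facing the screen SC (step S16)
                controller.distribute_to_projector(gesture)    # step S17 (FIG. 9)
            # any other orientation: ignore and keep monitoring

    def presentation_finished():
        # Placeholder: in the real system this would reflect an end-of-presentation operation.
        return False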

  Next, the distribution process (step S15 in FIG. 7) of the presentation material BP to the listener terminals 70 (70A to 70D) will be described with reference to the flowchart in FIG.

  First, in step S51 of FIG. 8, it is determined whether or not a text material MP (specifically, a page image of the text material MP) is displayed on the screen SC. If it is determined that the text material MP is displayed on the screen SC, the process proceeds to step S52. On the other hand, if it is determined that the text material MP is not displayed on the screen SC, the process of the flowchart of FIG. 8 (the process of step S15 of FIG. 7) is terminated, and the process proceeds to step S16 of FIG.

  In step S52, it is determined whether or not the presenter PT has performed the gesture of raising all fingers (hereinafter also referred to as gesture GT1). Specifically, whether or not the gesture GT1 has been performed is determined by a matching process between the partial image BG (see FIG. 12) and the comparison image JG1 (see FIG. 13). As shown in FIG. 12, the partial image BG is an image of the vicinity of the tip of the arm; specifically, it is a fixed region around the tip of the portion ST extending from the body of the presenter PT (for example, a circular region whose radius corresponds to 15 cm when converted into real-space distance). Further, as shown in FIG. 13, the comparison image JG1 is a still image, captured in advance, of the presenter PT raising all fingers.
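
  The partial image BG is defined only as a circular region, with a radius corresponding to about 15 cm in real space, around the tip of the arm portion ST. A minimal sketch of cropping such a region from a frame is shown below; the pixels-per-centimeter scale, the NumPy masking approach, and the function name are assumptions for illustration.

    import numpy as np

    def extract_partial_image(frame, arm_tip_xy, pixels_per_cm, radius_cm=15):
        """Cut out the circular region BG around the arm tip (cx, cy)."""
        cx, cy = arm_tip_xy
        radius_px = int(radius_cm * pixels_per_cm)
        h, w = frame.shape[:2]
        ys, xs = np.ogrid[:h, :w]
        mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius_px ** 2
        partial = np.zeros_like(frame)
        partial[mask] = frame[mask]        # keep only pixels inside the circle
        return partial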

  If it is determined as a result of the matching process between the partial image BG and the comparison image JG1 that the gesture GT1 has been performed (the presenter PT raises all fingers), the process proceeds to step S53. On the other hand, if it is determined that the gesture GT1 has not been performed, the process proceeds to step S54.

  In step S53, the presentation apparatus 10 distributes all the text material MP (specifically, data of all pages of the text material MP) to each listener terminal 70 (70A to 70D). Thereby, the listeners UA to UD can appropriately browse all pages of the text material MP using the respective listener terminals 70A to 70D.

  In step S54, it is determined whether or not the presenter PT has performed a gesture GT for raising one finger (hereinafter also referred to as gesture GT2). Specifically, it is determined whether or not the gesture GT2 has been performed by performing a matching process between the partial image BG (see FIG. 12) and the comparison image JG2 (see FIG. 14). As illustrated in FIG. 14, the comparison image JG2 is an image obtained by capturing beforehand a state in which the presenter PT raises one finger.

  If it is determined that the gesture GT2 has been performed as a result of the matching process between the partial image BG and the comparison image JG2 (the presenter PT raises one finger), the process proceeds to step S55. On the other hand, if it is determined that the gesture GT2 has not been performed, the process proceeds to step S56.

  In step S55, the presentation device 10 distributes a part of the text material MP (specifically, the data of the page displayed on the screen SC among the plurality of pages of the text material MP) to each listener terminal 70 (70A to 70D). Thereby, the listeners UA to UD can browse the page displayed on the screen SC using their respective listener terminals 70A to 70D.

  In step S56, it is determined whether or not the presenter PT has performed a gesture GT for raising three fingers (hereinafter also referred to as gesture GT3). Specifically, it is determined whether or not the gesture GT3 has been performed by performing a matching process between the partial image BG (see FIG. 12) and the comparison image JG3 (see FIG. 15). As shown in FIG. 15, the comparative image JG3 is an image obtained by photographing in advance a state where the presenter PT raises three fingers.

  If it is determined that the gesture GT3 has been performed as a result of the matching process between the partial image BG and the comparison image JG3 (the presenter PT has raised three fingers), the process proceeds to step S57. On the other hand, if it is determined that the gesture GT3 has not been performed, the process of the flowchart in FIG. 8 (step S15 in FIG. 7) ends, and the process proceeds to step S16 in FIG.

  In step S57, the presentation apparatus 10 distributes all of the attachment material SP (specifically, data of all pages of the attachment material SP) to each listener terminal 70 (70A to 70D). Thereby, the listeners UA to UD can appropriately browse all the pages of the attachment SP using the respective listener terminals 70A to 70D.
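
  The branch structure of steps S51 to S57 maps the three gestures onto three distribution actions while the text material MP is on the screen. The sketch below restates that mapping in code; the material objects, the all_pages and current_page methods, and the send helper are hypothetical stand-ins introduced only for illustration.

    def distribute_to_terminals(gesture, displayed, text_mp, attached_sp, terminals):
        """Steps S51 to S57 of FIG. 8 (sketch): distribution when the text material MP
        is displayed on the screen SC and the presenter faces the listeners (D1)."""
        if displayed is not text_mp:           # step S51: only when MP is on screen
            return
        if gesture == "GT1":                   # all fingers raised
            send(terminals, text_mp.all_pages())           # step S53
        elif gesture == "GT2":                 # one finger raised
            send(terminals, [text_mp.current_page()])      # step S55
        elif gesture == "GT3":                 # three fingers raised
            send(terminals, attached_sp.all_pages())       # step S57

    def send(terminals, pages):
        for terminal in terminals:
            terminal.receive(pages)            # e.g. network transfer to 70A to 70D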

  Next, the distribution process of the presentation material BP to the display output device 30 (step S17 in FIG. 7) will be described with reference to the flowchart in FIG.

  In step S71, it is determined whether or not the presenter PT has performed the gesture of moving the arm raised beside the face to the right (hereinafter also referred to as gesture GT4), as shown in FIG. 17.

  Specifically, it is determined whether or not the gesture GT4 has been performed by comparing the image SG2 (see FIG. 16) and the image TG1 (see FIG. 17). Note that the image SG2 is a captured image when the presenter PT performs the common operation CA. In this image SG2, the face of the presenter PT faces the direction D2. The image TG1 is a captured image after a predetermined time (for example, after 2 seconds) from the time of capturing the image SG2.

  Here, as shown in FIG. 17, when the arm position of the presenter PT in the image TG1 is further to the right, as viewed from the presenter PT (to the left in the image SG2), than the arm position of the presenter PT in the image SG2, it is determined that the gesture GT4 has been performed; otherwise, it is determined that the gesture GT4 has not been performed. Here, the gesture GT4 is detected on condition that the face direction of the presenter PT is the direction D2; however, the present invention is not limited to this, and the gesture GT4 may be detected without this condition. Specifically, whether or not the gesture GT4 has been performed may be determined by comparing the image SG2 with the image TG1 regardless of whether or not the face of the presenter PT is oriented in the direction D2.
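
  Gesture GT4, and symmetrically gesture GT5 described below, are distinguished purely by whether the arm has moved right or left between the frame SG2 (taken at the common operation CA) and a frame taken a couple of seconds later. A small sketch of that comparison follows; how the arm-tip x coordinate is obtained is left abstract, and the pixel threshold is an assumption not found in the patent.

    def classify_horizontal_gesture(arm_x_sg2, arm_x_later, min_shift_px=20):
        """Compare arm x positions between image SG2 and the image taken ~2 s later.
        In the captured image, a move to the presenter's right appears as a shift
        to the left (smaller x), and a move to the left as a shift to the right."""
        shift = arm_x_later - arm_x_sg2
        if shift <= -min_shift_px:
            return "GT4"          # arm moved to the presenter's right -> next page
        if shift >= min_shift_px:
            return "GT5"          # arm moved to the presenter's left -> previous page
        return None               # neither gesture detected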

  If it is determined that the gesture GT4 has been performed, the process proceeds to step S72, where the data of the page NP next to the page projected on the screen SC is distributed to the display output device 30, and the page image of the next page NP is projected onto the screen SC.

  For example, suppose that the page image of the second page of the text material MP is projected on the screen SC as shown in FIG. 20 and it is determined that the presenter PT has performed the gesture GT4. In this case, as shown in FIG. 21, the page image of the third page, which is the page NP next to the second page of the text material MP being projected, is newly projected onto the screen SC.

  On the other hand, if it is determined that the gesture GT4 has not been performed, the process proceeds to step S73. In step S73, it is determined whether or not the presenter PT has performed the gesture of moving the arm raised beside the face to the left (hereinafter also referred to as gesture GT5) (see FIG. 18).

  Specifically, it is determined whether or not the gesture GT5 has been performed by comparing the image SG2 (see FIG. 16) and the image TG2 (see FIG. 18). Note that the image TG2 is a captured image after a predetermined time (for example, after 2 seconds) from the time of capturing the image SG2.

  Here, as shown in FIG. 18, when the arm position of the presenter PT in the image TG2 is further to the left, as viewed from the presenter PT (to the right in the image SG2), than the arm position of the presenter PT in the image SG2, it is determined that the gesture GT5 has been performed; otherwise, it is determined that the gesture GT5 has not been performed.

  If it is determined that the gesture GT5 has been performed, the process proceeds to step S74, where the data of the page PP preceding the page projected on the screen SC is distributed to the display output device 30, and the page image of the previous page PP is projected onto the screen SC.

  For example, suppose that the page image of the third page of the text material MP is projected on the screen SC as shown in FIG. 21 and it is determined that the presenter PT has performed the gesture GT5. In this case, as shown in FIG. 20, the page image of the second page, which is the page preceding the third page of the text material MP being projected, is newly projected onto the screen SC.

  On the other hand, if it is determined that the gesture GT5 has not been performed, the process proceeds to step S75.

  In step S75, it is determined whether or not the presenter PT has performed the gesture of moving the arm raised beside the face further upward (hereinafter also referred to as gesture GT6) (see FIG. 19).

  Specifically, it is determined whether or not the gesture GT6 has been performed by comparing the image SG2 (see FIG. 16) and the image TG3 (see FIG. 19). As described above, the common operation CA is an operation in which the presenter PT lifts the arm to a position higher than right next to the face with the elbow bent. The gesture GT6 is an operation following the common operation CA, and specifically, an operation of lifting the arm lifted in the common operation CA to a higher position. The image SG2 is a captured image at the time when the presenter PT performs the common operation CA, and the image TG3 is a captured image after a predetermined time (for example, 2 seconds) after the image SG2 is captured.

  As shown in FIG. 19, when the position of the arm of the presenter PT related to the image TG3 is above the position of the arm of the presenter PT related to the image SG2, it is determined that the gesture GT6 has been performed. In other cases, it is determined that the gesture GT6 is not performed.

  If it is determined that the gesture GT6 has been performed, the process proceeds to step S76. If it is determined that the gesture GT6 has not been performed, the process of the flowchart of FIG. 9 (the process of step S17 of FIG. 7) is terminated, and the process proceeds to step S18 of FIG. 7.

  In step S76, an operation of switching the display target material on the display output device 30 between the text material MP and the attached material SP is performed. Specifically, when a page image of the text material MP is being projected on the screen SC, a page image of the attached material SP is distributed to the display output device 30 and projected on the screen SC. Conversely, when a page image of the attached material SP is being projected on the screen SC, a page image of the text material MP is distributed to the display output device 30 and projected on the screen SC.
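
  This switching step is essentially a toggle between the two materials, with each material remembering the page it last displayed so that it can be resumed later, as in the FIG. 20 to FIG. 24 example that follows. A minimal sketch of that toggle, with invented state variables, is shown below.

    class DisplayState:
        """Sketch of the projector-side display state used by the material switch."""
        def __init__(self, text_mp_pages, attached_sp_pages):
            self.pages = {"MP": text_mp_pages, "SP": attached_sp_pages}
            self.current = {"MP": 0, "SP": 0}    # last displayed page index per material
            self.showing = "MP"                  # the text material is shown first (step S11)

        def toggle_material(self):
            # Step S76: switch between the text material MP and the attached material SP,
            # resuming each material at the page it last displayed.
            self.showing = "SP" if self.showing == "MP" else "MP"
            return self.pages[self.showing][self.current[self.showing]]

        def next_page(self):
            # Step S72: advance the currently shown material by one page.
            idx = min(self.current[self.showing] + 1,
                      len(self.pages[self.showing]) - 1)
            self.current[self.showing] = idx
            return self.pages[self.showing][idx]

  Starting from the third page of the text material MP, the sequence GT6, GT4, GT6, GT4, GT6 on this state reproduces the example described next: SP page 1, SP page 2, MP page 3 again, MP page 4, and SP page 2 again.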

  For example, assume that the page image of the third page of the text material MP is displayed on the screen SC as shown in FIG. 21. Here, when the presenter PT performs the gesture GT6 for the first time, the page image of the first page of the attached material SP is distributed to the display output device 30 and displayed on the screen SC as shown in FIG. 22 (step S76 in FIG. 9).

  Thereafter, when the presenter PT performs the gesture GT4 while the page image of the first page of the attached material SP is displayed, the page image of the second page of the attached material SP is distributed to the display output device 30 and displayed on the screen SC as shown in FIG. 23 (step S72 in FIG. 9).

  Furthermore, when the presenter PT performs the gesture GT6 a second time while the second page of the attached material SP is displayed, the page image of the third page of the text material MP, which was displayed before the attached material SP, is distributed to the display output device 30 and displayed on the screen SC again, as shown in FIG. 21.

  Thereafter, when the presenter PT performs the gesture GT4 while the third page of the text material MP is displayed, the page image of the fourth page of the text material MP is distributed to the display output device 30 and displayed on the screen SC as shown in FIG. 24 (step S72 in FIG. 9).

  Further, when the presenter PT performs the gesture GT6 a third time while the fourth page of the text material MP is displayed, the page image of the second page of the attached material SP, which was displayed before the text material MP, is distributed to the display output device 30 and displayed on the screen SC again, as shown in FIG. 23 (step S76 in FIG. 9).

  According to the above operation, the detection of the gesture GT of the presenter PT is started in response to the presentation start instruction (step S12 in FIG. 7), so that erroneous detection of a gesture GT made before the start of the presentation can be avoided.

  Further, since the distribution destination of the presentation material BP is determined based on the orientation of the face of the presenter PT at the time the gesture GT is detected (see steps S14, S16, etc.), the presenter PT can easily designate the distribution destination.

  In particular, when it is determined that the face orientation is the direction D1 (the direction from the location of the presenter PT toward the listeners UA to UD), the distribution destination of the presentation material BP is determined to be the listener terminals 70A to 70D. In other words, the face orientation (direction D1) of the presenter PT and the distribution destination (listener terminals 70A to 70D) of the presentation material BP are closely related. Therefore, the presenter PT can intuitively recognize the relationship between the orientation of the face when performing the gesture GT and the distribution destination of the presentation material BP.

  Similarly, when it is determined that the face orientation is the direction D2 (the direction from the location of the presenter PT toward the screen SC), the distribution destination of the presentation material BP is determined to be the display output device 30. In other words, the face orientation (direction D2) of the presenter PT and the distribution destination (display output device 30) of the presentation material BP are closely related. Therefore, the presenter PT can intuitively recognize the relationship between the orientation of the face when performing the gesture GT and the distribution destination of the presentation material BP.

  Further, in the above operation, the distribution operation of the presentation material BP is controlled in accordance with the gestures GT1 to GT6. Therefore, the presenter PT can instruct the distribution of the presentation material BP by a simple operation using the gestures GT1 to GT6.

  Specifically, based on the detected content of the gesture GT (specifically, which of the gestures GT1 and GT2 is detected), it is determined whether all pages of the distribution target material or some pages (a single page) of the distribution target material are set as the distribution target pages (step S15). Therefore, by using the gestures GT1 and GT2 selectively, the presenter PT can switch the distribution target between the whole (all pages) of a specific material (here, the text material MP) in the presentation material BP and a part of that material (the single page being displayed).

  Further, based on the detected content of the gesture GT (specifically, which of the gestures GT1, GT2, and GT3 is detected), it is determined which of the text material MP and the attached material SP is to be the distribution target material. Therefore, by using the gestures GT1, GT2, and GT3 selectively, the presenter PT can change which of the text material MP and the attached material SP (for example, the attached material SP) is the distribution target material.

  In particular, by using the gesture GT3, the presenter PT can also designate the whole (all pages) of whichever of the text material MP and the attached material SP is not being displayed (for example, the attached material SP) as the distribution target material.

  Further, the distribution target material (and consequently the display target material on the screen SC) is changed based on the detected content of the gesture GT (specifically, whether the gesture GT6 is detected). Therefore, the presenter PT can switch the distribution target material by using the gesture GT6, and thus can switch which of the text material MP and the attached material SP is displayed on the screen SC.

  Further, based on the detected content of the gesture GT (specifically, which of the gestures GT4 and GT5 is detected), it is determined whether the page NP next to the displayed page on the screen SC or the page preceding the displayed page is set as the distribution target page. Therefore, by using the gestures GT4 and GT5 selectively, the presenter PT can instruct an operation of changing the displayed page of the material being displayed (for example, the text material MP) in the presentation material BP.

  In the above-described embodiment, the types of gestures GT1 to GT6 are determined based on both the unique action (arm and / or finger action) following the common action CA and the face orientation of the presenter PT. Therefore, it is possible to improve the accuracy of discrimination of the gestures GT1 to GT6 as compared with the case where the above six types of gestures GT1 to GT6 are discriminated without considering the face orientation.

<6. Modified example>
Although the embodiments of the present invention have been described above, the present invention is not limited to the contents described above.

  For example, in the above embodiment, the case where the presentation material BP is distributed to the listener terminals 70 on condition that the text material MP is displayed has been illustrated, but the present invention is not limited to this; the presentation material BP may instead be distributed to the listener terminals 70 on condition that the attached material SP is displayed. Hereinafter, such a modification will be described in more detail with reference to FIG. 25.

  In this modification, the process shown in FIG. 25 is executed as step S15 instead of the process shown in FIG. 8.

  First, in step S91 in FIG. 25, it is determined whether or not the attached material SP (specifically, the page image of the attached material SP) is displayed on the screen SC. If it is determined that the attached material SP is displayed on the screen SC, the process proceeds to step S92. On the other hand, if it is determined that the attached material SP is not displayed on the screen SC, the process of the flowchart of FIG. 25 (the process of step S15 of FIG. 7) is terminated, and the process proceeds to step S16 of FIG.

  In step S92, it is determined whether or not the gesture GT1 has been performed. If it is determined that the gesture GT1 has been performed (the presenter PT raises all fingers), the process proceeds to step S93. On the other hand, if it is determined that the gesture GT1 has not been performed (the presenter PT has not raised all fingers), the process proceeds to step S94.

  In step S93, the presentation apparatus 10 distributes all of the attached material SP (specifically, data of all pages of the attached material SP) to each listener terminal 70 (70A to 70D). Thereby, the listeners UA to UD can appropriately browse all the pages of the attachment SP using the respective listener terminals 70A to 70D.

  In step S94, it is determined whether or not the gesture GT2 has been performed. If it is determined that the gesture GT2 has been performed (the presenter PT raises one finger), the process proceeds to step S95. On the other hand, if it is determined that the gesture GT2 has not been performed (the presenter PT has not raised one finger), the process proceeds to step S96.

  In step S95, the presentation apparatus 10 transmits a part of the attached material SP (specifically, the data of the page displayed on the screen SC among the plurality of pages of the attached material SP) to each listener terminal 70 (70A to 70D). Thereby, the listeners UA to UD can browse the page of the attached material SP displayed on the screen SC using the respective listener terminals 70A to 70D.

  In step S96, it is determined whether or not the gesture GT3 has been performed. If it is determined that the gesture GT3 has been performed (that is, that the presenter PT has raised three fingers), the process proceeds to step S97. On the other hand, if it is determined that the gesture GT3 has not been performed (that the presenter PT has not raised three fingers), the process of the flowchart in FIG. 25 (step S15 in FIG. 7) is terminated, and the process proceeds to step S16 in FIG. 7.

  In step S97, the presentation apparatus 10 distributes the whole of the text material MP (specifically, the data of all pages of the text material MP) to each listener terminal 70 (70A to 70D). Thereby, the listeners UA to UD can browse all the pages of the text material MP as needed using the respective listener terminals 70A to 70D.
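  Put differently, the process of FIG. 25 is a simple dispatch on the gesture type that is performed only while the attached material SP is displayed on the screen SC. The following Python sketch illustrates that flow under assumed names; step_s15_modified, all_pages(), displayed_page() and distribute() are hypothetical identifiers used only for illustration and are not taken from the embodiment.

```python
def step_s15_modified(displayed_material, attached_sp, text_mp, gesture, distribute):
    """Illustrative sketch of the modified step S15 (the flow of FIG. 25).

    `gesture` is assumed to be one of the strings "GT1", "GT2", "GT3" (or None);
    `distribute(pages)` is assumed to send the given page data to each
    listener terminal 70A to 70D.
    """
    # S91: act only while the attached material SP is displayed on the screen SC.
    if displayed_material is not attached_sp:
        return  # the flow of FIG. 25 ends; processing continues at step S16 of FIG. 7

    if gesture == "GT1":      # S92 -> S93: all fingers raised
        distribute(attached_sp.all_pages())
    elif gesture == "GT2":    # S94 -> S95: one finger raised
        distribute([attached_sp.displayed_page()])
    elif gesture == "GT3":    # S96 -> S97: three fingers raised
        distribute(text_mp.all_pages())
    # otherwise nothing is distributed and processing continues at step S16 of FIG. 7
```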

  Moreover, in this modification (FIG. 25), the case where the distribution operation corresponding to the gestures GT1 to GT3 is performed on the condition that the attached material SP is displayed on the screen SC is illustrated, whereas in the above embodiment (FIG. 8), the distribution operation corresponding to the gestures GT1 to GT3 is performed on the condition that the text material MP is displayed on the screen SC. However, the present invention is not limited to this; when either the attached material SP or the text material MP is displayed on the screen SC, a distribution operation corresponding to the type of material displayed on the screen SC may be performed. In short, the operation of FIG. 25 and the operation of FIG. 8 may be executed in combination.

  More specifically, as shown in FIG. 26, when the gesture GT1 is detected, the data of all pages of whichever of the text material MP and the attached material SP is displayed on the screen SC may be distributed (step S103). Similarly, when the gesture GT2 is detected, part of the material displayed on the screen SC (only the page being displayed) may be distributed out of the text material MP and the attached material SP (step S105). When the gesture GT3 is detected, the data of all pages of the material different from the material (text material MP or attached material SP) being displayed on the screen SC may be distributed (step S107). Specifically, when the gesture GT3 is detected while the text material MP is displayed on the screen SC, the data of all pages of the attached material SP is distributed; when the attached material SP is displayed on the screen SC, the data of all pages of the text material MP may be distributed.
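  A minimal sketch of this combined behavior, using the same hypothetical naming as above, is given below; displayed stands for whichever material is currently shown on the screen SC and other for the remaining material.

```python
def step_s15_combined(displayed, other, gesture, distribute):
    """Illustrative sketch of the combined operation of FIG. 26 (assumed names).

    `displayed` is whichever of the text material MP and the attached material SP
    is currently shown on the screen SC; `other` is the remaining material;
    `distribute(pages)` sends page data to the listener terminals 70A to 70D.
    """
    if gesture == "GT1":      # S103: all pages of the displayed material
        distribute(displayed.all_pages())
    elif gesture == "GT2":    # S105: only the page being displayed
        distribute([displayed.displayed_page()])
    elif gesture == "GT3":    # S107: all pages of the other material
        distribute(other.all_pages())
```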

  Moreover, although the case where the gestures GT1 to GT6 are discriminated (identified) based in part on the face orientation of the presenter PT is illustrated in the above embodiment and the like, the present invention is not limited to this. For example, a plurality of gestures GT may be discriminated (identified) from one another based only on the unique action (arm and/or finger action) following the common action CA, without considering the face orientation of the presenter PT. For example, the plural types of gestures GT1 to GT6 (or other types of gestures) may be determined without considering the face orientation of the presenter PT. In this case, the target data may be distributed to a distribution destination predetermined for each gesture. For example, the listener terminals 70 (70A to 70D) may be predetermined as the distribution destination for the gestures GT1 to GT3, and the display output device 30 may be predetermined as the distribution destination for the gestures GT4 to GT6.
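  This variation amounts to a fixed mapping from gesture type to distribution destination. A hypothetical rendering of such a mapping is shown below; the dictionary and identifiers are illustrative only and do not appear in the embodiment.

```python
# Hypothetical table of predetermined distribution destinations, used when the
# face orientation of the presenter PT is not taken into account.
GESTURE_DESTINATIONS = {
    "GT1": "listener_terminals",      # listener terminals 70A to 70D
    "GT2": "listener_terminals",
    "GT3": "listener_terminals",
    "GT4": "display_output_device",   # display output device 30
    "GT5": "display_output_device",
    "GT6": "display_output_device",
}

def destination_for(gesture):
    """Return the predetermined destination for a detected gesture, or None."""
    return GESTURE_DESTINATIONS.get(gesture)
```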

  Further, in the above-described embodiment, the case where the type of distribution target material (text material MP / attached material SP) and the distribution target pages (all pages / single page) are changed according to the type of the gesture GT is exemplified; however, the present invention is not limited to this. For example, when a single gesture GT10 (for example, a gesture composed only of the common action CA) is detected, predetermined data (for example, the data of the page being displayed of the material being displayed on the screen SC) may be distributed. In this case, as in the above embodiment, the distribution destination may be changed according to the face orientation of the presenter PT when the gesture GT10 is performed.
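  A sketch of this single-gesture variation, again under hypothetical names, might look as follows: the data to distribute is fixed, and only the destination changes with the face orientation of the presenter PT.

```python
def handle_gt10(face_direction, displayed_material, send_to_listeners, send_to_display):
    """Illustrative sketch of the single-gesture (GT10) variation (assumed names)."""
    page = displayed_material.displayed_page()   # page being displayed on the screen SC
    if face_direction == "toward_listeners":     # first direction (toward the listeners)
        send_to_listeners([page])
    elif face_direction == "toward_screen":      # second direction (toward the screen SC)
        send_to_display([page])
```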

BP  Presentation material
HG  Image for comparison
GT1 to GT6  Gestures
JG1 to JG3  Images for comparison
MP  Text material
SA  Peripheral area
SP  Attached material

Claims (19)

  1. A presentation system comprising:
    a receiving means for receiving a presentation start instruction;
    a detecting means for starting detection of a presenter's gesture in response to the start instruction and for detecting the face orientation of the presenter at the time of detecting the gesture; and
    a control means for controlling a distribution operation of a presentation material based on the detected content of the gesture,
    wherein the control means determines a distribution destination of the presentation material based on the face orientation.
  2. The presentation system according to claim 1,
    wherein the control means distributes the presentation material to a listener terminal, which is a terminal of a listener, on the condition that the face orientation is a first direction from the location of the presenter toward the listener.
  3. The presentation system according to claim 1 or 2,
    wherein the control means distributes the presentation material to a display output unit on the condition that the face orientation is a second direction from the location of the presenter toward a display surface of an output image by the display output unit.
  4. The presentation system according to any one of claims 1 to 3,
    wherein the control means determines a distribution target material based on the detected content.
  5. The presentation system according to claim 4,
    wherein the control means determines which of a text material and an attached material is the distribution target material based on the detected content.
  6. The presentation system according to any one of claims 1 to 3,
    wherein the control means determines a distribution target page of a distribution target material based on the detected content.
  7. The presentation system according to claim 6,
    wherein the control means determines, based on the detected content, whether all pages of the distribution target material or some pages of the distribution target material are the distribution target pages.
  8. The presentation system according to claim 6,
    wherein the control means determines, based on the detected content, whether the distribution target page is the page next to a displayed page shown on a display surface of an output image by a display output unit or the page preceding the displayed page.
  9. The presentation system according to claim 2,
    wherein the control means distributes the data of all pages of a specific material among the presentation material to the listener terminal on the condition that a first gesture is detected by the detecting means.
  10. The presentation system according to claim 9,
    wherein the presentation material includes a text material and an attached material, and
    when the first gesture is detected, the control means
    distributes the data of all pages of the text material to the listener terminal when the text material is displayed on a display surface of an output image by a display output unit, and
    distributes the data of all pages of the attached material to the listener terminal when the attached material is displayed on the display surface.
  11. The presentation system according to claim 2,
    wherein, on the condition that a second gesture is detected by the detecting means, the control means distributes, to the listener terminal, the data of the page displayed on a display surface of an output image by a display output unit among a plurality of pages of the presentation material.
  12. The presentation system according to claim 11,
    wherein the presentation material includes a text material and an attached material, and
    when the second gesture is detected, the control means
    distributes, to the listener terminal, the data of the page displayed on the display surface among a plurality of pages of the text material when the text material is displayed on the display surface, and
    distributes, to the listener terminal, the data of the page displayed on the display surface among a plurality of pages of the attached material when the attached material is displayed on the display surface.
  13. The presentation system according to claim 2,
    wherein the presentation material includes a text material and an attached material, and
    when a third gesture is detected by the detecting means, the control means distributes the whole of the attached material to the listener terminal when the text material is displayed on a display surface of an output image by a display output unit, and
    distributes the whole of the text material to the listener terminal when the attached material is displayed on the display surface.
  14. The presentation system according to claim 3,
    wherein, on the condition that a fourth gesture is detected by the detecting means, the control means distributes, to the display output unit, the data of the page next to the page displayed on the display surface among a plurality of pages of the presentation material.
  15. The presentation system according to claim 3,
    wherein, on the condition that a fifth gesture is detected by the detecting means, the control means distributes, to the display output unit, the data of the page preceding the page displayed on the display surface among a plurality of pages of the presentation material.
  16. The presentation system according to claim 3,
    wherein the presentation material includes a text material and an attached material, and
    when a sixth gesture is detected by the detecting means, the control means distributes the attached material to the display output unit when the text material is displayed on the display surface, and
    distributes the text material to the display output unit when the attached material is displayed on the display surface.
  17. A presentation system comprising:
    a photographing device for photographing a presenter; and
    a presentation device capable of communicating with the photographing device,
    wherein the presentation device includes:
    a receiving means for receiving a presentation start instruction;
    a detecting means for detecting a gesture of the presenter based on an image photographed by the photographing device; and
    a control means for controlling a distribution operation of a presentation material based on the detected content of the gesture,
    the detecting means starts detection of the presenter's gesture in response to the start instruction and detects the face orientation of the presenter at the time of detecting the gesture, and
    the control means determines a distribution destination of the presentation material based on the face orientation.
  18. A presentation device comprising:
    a receiving means for receiving a presentation start instruction;
    a detecting means for starting detection of a presenter's gesture in response to the start instruction and for detecting the face orientation of the presenter at the time of detecting the gesture; and
    a control means for controlling a distribution operation of a presentation material based on the detected content of the gesture,
    wherein the control means determines a distribution destination of the presentation material based on the face orientation.
  19. A program for causing a computer to execute the steps of:
    a) receiving a presentation start instruction;
    b) starting detection of a presenter's gesture in response to the start instruction and detecting the face orientation of the presenter at the time of detecting the gesture; and
    c) distributing a presentation material based on the detected content of the gesture,
    wherein the step c) includes a step of determining a distribution destination of the presentation material based on the face orientation.
JP2011153031A 2011-07-11 2011-07-11 Presentation system, presentation device, and program Active JP5482740B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011153031A JP5482740B2 (en) 2011-07-11 2011-07-11 Presentation system, presentation device, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011153031A JP5482740B2 (en) 2011-07-11 2011-07-11 Presentation system, presentation device, and program
US13/530,915 US9740291B2 (en) 2011-07-11 2012-06-22 Presentation system, presentation apparatus, and computer-readable recording medium

Publications (2)

Publication Number Publication Date
JP2013020434A JP2013020434A (en) 2013-01-31
JP5482740B2 true JP5482740B2 (en) 2014-05-07

Family

ID=47519678

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011153031A Active JP5482740B2 (en) 2011-07-11 2011-07-11 Presentation system, presentation device, and program

Country Status (2)

Country Link
US (1) US9740291B2 (en)
JP (1) JP5482740B2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5482740B2 (en) * 2011-07-11 2014-05-07 コニカミノルタ株式会社 Presentation system, presentation device, and program
JP6010062B2 (en) * 2014-03-17 2016-10-19 京セラドキュメントソリューションズ株式会社 Cue point control device and cue point control program
KR20150130808A (en) * 2014-05-14 2015-11-24 삼성전자주식회사 Method and apparatus of identifying spatial gesture of user
US9891803B2 (en) * 2014-11-13 2018-02-13 Google Llc Simplified projection of content from computer or mobile devices into appropriate videoconferences
CN107010491B (en) * 2017-04-24 2018-08-14 芜湖市海联机械设备有限公司 A kind of electric cable reel

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5923337A (en) * 1996-04-23 1999-07-13 Image Link Co., Ltd. Systems and methods for communicating through computer animated images
GB9906305D0 (en) * 1999-03-18 1999-05-12 Bolero International Limited Transaction support system
US6346933B1 (en) * 1999-09-21 2002-02-12 Seiko Epson Corporation Interactive display presentation system
US20030191805A1 (en) * 2002-02-11 2003-10-09 Seymour William Brian Methods, apparatus, and systems for on-line seminars
US8949382B2 (en) * 2003-02-26 2015-02-03 Siemens Industry, Inc. Systems, devices, and methods for network wizards
US7379560B2 (en) * 2003-03-05 2008-05-27 Intel Corporation Method and apparatus for monitoring human attention in dynamic power management
JP2004314855A (en) * 2003-04-17 2004-11-11 Sumitomo Electric Ind Ltd Apparatus operation control method and apparatus operation control system
JP2005063092A (en) * 2003-08-11 2005-03-10 Keio Gijuku Hand pattern switch device
US20060192775A1 (en) * 2005-02-25 2006-08-31 Microsoft Corporation Using detected visual cues to change computer system operating states
JP2006312347A (en) * 2005-05-06 2006-11-16 Nissan Motor Co Ltd Command input device
KR100776801B1 (en) * 2006-07-19 2007-11-19 한국전자통신연구원 Gesture recognition method and system in picture process system
US8160056B2 (en) * 2006-09-08 2012-04-17 At&T Intellectual Property Ii, Lp Systems, devices, and methods for network routing
US7770115B2 (en) * 2006-11-07 2010-08-03 Polycom, Inc. System and method for controlling presentations and videoconferences using hand motions
US20110025818A1 (en) * 2006-11-07 2011-02-03 Jonathan Gallmeier System and Method for Controlling Presentations and Videoconferences Using Hand Motions
US8011583B2 (en) * 2007-07-02 2011-09-06 Microscan Systems, Inc. Systems, devices, and/or methods for managing data matrix lighting
JP5207513B2 (en) * 2007-08-02 2013-06-12 公立大学法人首都大学東京 Control device operation gesture recognition device, control device operation gesture recognition system, and control device operation gesture recognition program
US8918657B2 (en) * 2008-09-08 2014-12-23 Virginia Tech Intellectual Properties Systems, devices, and/or methods for managing energy usage
JP5183398B2 (en) * 2008-09-29 2013-04-17 株式会社日立製作所 Input device
US8436789B2 (en) * 2009-01-16 2013-05-07 Microsoft Corporation Surface puck
JP5115499B2 (en) * 2009-03-06 2013-01-09 セイコーエプソン株式会社 Presentation device
US20100306249A1 (en) * 2009-05-27 2010-12-02 James Hill Social network systems and methods
TW201214196A (en) * 2010-09-23 2012-04-01 Hon Hai Prec Ind Co Ltd Interactive display system
US8599106B2 (en) * 2010-10-01 2013-12-03 Z124 Dual screen application behaviour
WO2013000125A1 (en) * 2011-06-28 2013-01-03 Nokia Corporation Method and apparatus for live video sharing with multimodal modes
JP5482740B2 (en) * 2011-07-11 2014-05-07 コニカミノルタ株式会社 Presentation system, presentation device, and program
US9317175B1 (en) * 2013-09-24 2016-04-19 Amazon Technologies, Inc. Integration of an independent three-dimensional rendering engine
US9503687B2 (en) * 2015-03-24 2016-11-22 Fuji Xerox Co., Ltd. Personalized meeting event capture using egocentric tracking in smart spaces

Also Published As

Publication number Publication date
US20130019178A1 (en) 2013-01-17
JP2013020434A (en) 2013-01-31
US9740291B2 (en) 2017-08-22

Legal Events

Date Code Title Description
A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A712

Effective date: 20130418

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20130619

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20131009

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20131022

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20131219

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20140121

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20140203

R150 Certificate of patent or registration of utility model

Ref document number: 5482740

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150