JP2009088868A - Photographed image processing system and display image creation program - Google Patents



Publication number
JP2009088868A
JP2009088868A (application number JP2007254665A)
Authority
JP
Japan
Prior art keywords
indicator
image
captured image
camera
region
Prior art date
Legal status
Granted
Application number
JP2007254665A
Other languages
Japanese (ja)
Other versions
JP4854033B2 (en)
Inventor
Katsuyoshi Komatsu
Original Assignee
Sky Kk
Priority date
Filing date
Publication date
Application filed by Sky Kk
Priority to JP2007254665A
Publication of JP2009088868A
Application granted
Publication of JP4854033B2
Legal status: Active

Abstract

An arbitrary area of a subject can be displayed on a monitor from a photographed image without optically controlling the camera.
A photographed image processing system includes an image processing apparatus 1 that processes a photographed image acquired by a camera photographing a subject and transfers the resulting display image to a monitor. The image processing apparatus 1 includes: a pointer region detection unit 20 that detects, from the captured image acquired by the camera, the pointer region, which is the region of a pointer 4a placed between the subject and the camera; a three-dimensional position calculation unit 30 that calculates the three-dimensional position of the pointer using the detection information of the pointer region; a specifying unit 40A that specifies a specific partial captured image within the captured image based on the three-dimensional position of the pointer; a display image generation unit that generates the specific partial captured image as a display image; and a display image transfer unit 50 that transfers the display image to the monitor.
[Selection] Figure 2

Description

  The present invention relates to an image processing technique for processing a captured image acquired by a camera that photographs a subject and transferring the resulting display image to a monitor.

  Conventionally, in apparatuses that photograph and display a subject, techniques for controlling the display form of the subject through some user instruction have been studied. For example, a device is known in which a document or the like is used as the subject, two points on the subject are designated with an instruction pen, a rectangular area having the two designated points as a diagonal is determined as an attention area, and the pan, tilt, zoom, and the like of the camera are controlled so that the attention area is enlarged to a predetermined size (Patent Document 1). Also known is a technique in which, when photographing a subject (a person), a remote control held by the subject is photographed together with the subject, and the angle of view of the camera is controlled (zoomed) so that the light-emitting portion of the remote control stays at a predetermined position within the shooting screen (Patent Document 2). With this apparatus, the person being photographed can change the camera's angle of view by moving the remote control in his or her hand up, down, left, or right. Furthermore, a digital copying machine capable of enlarging, reducing, and copying only the marker-designated area of a document (Patent Document 3), and an apparatus that recognizes a predetermined shape drawn with a laser pointer on an image projected by a projector and controls the projector's projection image accordingly (Patent Document 4), are also known.

JP 08-125921 A (paragraphs 0010-0011, FIG. 1); JP 2003-289464 A (abstract, FIG. 3); JP 2004-221898 A (paragraph 0006); JP 2004-078682 A (abstract, FIG. 1)

  However, with the technique of Patent Document 1, two points must be designated again each time the shooting position is changed, so the operation becomes cumbersome when the display area changes frequently, and a remote control system for the camera is required. The technique of Patent Document 2 merely adjusts the camera's angle of view so that the light-emitting unit of the remote control held by the subject stays at a predetermined position on the shooting screen; even though the on-screen position of one point related to the subject can be controlled, this has nothing to do with freely displaying an arbitrary area of the subject, and it too requires a remote control system for the camera's optical mechanism. The technique of Patent Document 3 allows intuitive operation because the document is marked directly with a marker, but because the marks are written directly on the document, the designated area cannot easily be changed, which is a problem in operability. The technique of Patent Document 4 must recognize the figure drawn by the laser pointer on the projector's projection image and control the projection optical system, and the recognition accuracy is easily affected by the environment.

  In view of the above situation, an object of the present invention is to provide an image processing technique that allows an arbitrary area of a subject to be freely displayed on a monitor from a photographed image of the subject, without optically controlling the camera.

  In order to solve the above problems, a captured image processing system according to the present invention includes a camera that photographs a subject, and an image processing apparatus that processes a captured image acquired by the camera and transfers the generated display image to a monitor. The system comprises: an indicator that can be placed at an arbitrary position; an indicator region detection unit that detects the indicator region, which is the region of the indicator, from the captured image acquired by the camera; an indicator three-dimensional position calculation unit that calculates the three-dimensional position of the indicator using at least the detection information of the indicator region; a specifying unit that specifies a specific partial captured image within the captured image based on the three-dimensional position of the indicator; a display image generation unit that generates the specific partial captured image as the display image; and a display image transfer unit that transfers the display image to the monitor.

  In the captured image processing system configured as described above, the operator can place the indicator at an arbitrary position; when the indicator is placed between the subject and the camera, its three-dimensional position is calculated, and based on that position a specific area of the captured image is displayed on the monitor as the specific partial captured image. When the specific partial captured image is displayed, it is converted into a display image suited to the display characteristics of the monitor, such as its display resolution and display size, or to the image display area set on the monitor screen. The cut-out reference position of the specific partial captured image within the captured image is determined from the X-Y coordinate position of the indicator, the cut-out size is determined from the Z coordinate position, and the specific partial captured image cut out from the captured image is displayed on the monitor as the display image. In other words, when the indicator is moved so that its X-Y coordinate position changes, the position from which the specific partial captured image is taken changes, and when its Z coordinate position changes, the size of the specific partial captured image changes. Therefore, according to the present invention, the operator can display any desired area of the subject on the monitor simply by bringing the indicator, held between the subject and the camera, to the corresponding position.

  In one preferred configuration of the indicator three-dimensional position calculation unit, the Z-axis coordinate of the indicator along the camera optical axis is calculated from the size of the indicator region in the captured image, and the X-Y axis coordinates in the cross-section of the camera optical axis at that Z-axis coordinate are calculated from the position of the indicator region in the captured image, thereby obtaining the three-dimensional position of the indicator. With this configuration, the three-dimensional position (X, Y, Z axis coordinates) of the indicator can be obtained solely by image analysis of the captured image acquired by the camera, without any additional position-measuring hardware.

  Of course, many devices exist that measure the position of an object; in particular, non-contact, highly accurate rangefinders such as laser rangefinders and ultrasonic rangefinders can be used to measure the distance between the camera and the indicator. In one such embodiment, the system is further provided with an indicator distance acquisition unit that measures the distance between the camera and the indicator with a rangefinder and acquires the indicator distance, and the indicator three-dimensional position calculation unit calculates the Z coordinate of the indicator along the camera optical axis from the indicator distance and calculates the X-Y axis coordinates in the cross-section of the camera optical axis at that Z-axis coordinate from the position of the indicator region in the captured image. With this configuration, the Z-axis coordinate of the indicator, which determines the display area of the subject on the monitor and in effect the monitor display magnification of the subject, can be obtained quickly and accurately, making enlargement and reduction operations smooth.

  The technical features of the captured image processing system according to the present invention described above can also be applied to the photographic display method used in this system and to the display image generation program used in the image processing apparatus of this system. For example, a display image generation program according to the present invention, for an image processing apparatus that processes a captured image obtained by photographing the subject and the indicator with the camera while the indicator is placed between the subject and the camera and transfers the generated display image to the monitor, causes a computer to realize: a function of recording the captured image sent from the camera in a memory; a function of detecting the indicator region, which is the region of the indicator, from the captured image acquired by the camera; a function of calculating the three-dimensional position of the indicator using at least the detection information of the indicator region; a function of cutting out, from the captured image developed in the memory, the specific partial captured image specified based on the three-dimensional position of the indicator; and a function of generating the specific partial captured image as the display image. Naturally, such a display image generation program achieves the operation and effects described for the captured image processing system above, and it can also incorporate the additional techniques described above as examples of embodiments.

  Furthermore, in order to solve the above-described problem, a captured image processing system according to the present invention includes a camera and an image processing apparatus that processes a subject captured image acquired by photographing the subject and transfers the generated display image to the monitor. This system comprises: an indicator that can be placed at an arbitrary position; an indicator region detection unit that detects the indicator region, which is the region of the indicator, from a captured image acquired by the camera; an image recording unit that records the subject captured image when the indicator region is not detected by the indicator region detection unit; an indicator three-dimensional position calculation unit that, when the indicator region is detected, calculates the three-dimensional position of the indicator using at least the detection information of the indicator region; a specifying unit that specifies a specific partial captured image within the subject captured image read from the image recording unit based on the three-dimensional position of the indicator; a display image generation unit that generates the specific partial captured image as the display image; and a display image transfer unit that transfers the display image to the monitor. In this captured image processing system, when an image recorded in the image recording unit is displayed on a monitor such as a liquid crystal display or a large screen, the displayed area of the image can be changed according to the position of the indicator held in front of the camera. Therefore, the operator can display a desired area of the photographed subject image on the monitor by bringing the indicator to the corresponding position in front of the camera. In addition, part of the subject may be hidden from the camera when the indicator overlaps the subject. In such a case, the subject image is recorded while the indicator is outside the camera's shooting space, and the recorded subject image is used to generate the display image while the indicator is inside the shooting space, so the influence of the indicator can be eliminated.
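The caching behaviour just described can be sketched in a few lines. This is an illustrative reading of the claim, not the patent's implementation; the function name and cache structure are hypothetical.

```python
# Sketch of the subject-image caching described above: while no indicator
# region is detected, the latest frame is recorded as the clean subject image;
# once the indicator appears (and may occlude part of the subject), display
# images are generated from the recorded frame instead of the live one.
# frame_for_display and the cache dict are illustrative names, not from the patent.

def frame_for_display(live_frame, indicator_detected, cache):
    if not indicator_detected:
        cache["subject"] = live_frame   # indicator absent: refresh the record
        return live_frame
    # indicator present: prefer the recorded, unoccluded subject image
    return cache.get("subject", live_frame)
```

Falling back to the live frame when no recording exists yet is one possible design choice; the patent text only specifies that the recorded image is used once available.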

  FIG. 1 shows an example in which the captured image processing system according to the present invention is applied to a school education support system. At the education site shown in FIG. 1, a teacher computer functioning as the image processing apparatus 1 according to the present invention and a USB camera (hereinafter simply referred to as the camera 2) connected to the image processing apparatus 1 are placed near the teacher standing on the platform. The camera 2 photographs, as the subject 3, a world map used as a teaching material placed on a desk. The teacher holds an insertion rod 4 with an indicator; a spherical indicator 4a is provided at the tip of the rod. Each student's desk is provided with a student computer 6 connected to the image processing apparatus 1 through the network 5, and images photographed by the camera 2 and processed by the image processing apparatus 1 can be displayed on the liquid crystal display (an example of a monitor) 6a of the student computer 6. A projector 7 (an example of a monitor) is also connected to the network 5, so that images photographed by the camera 2 and processed by the image processing apparatus 1 can be displayed on a large screen 7a placed at the front of the classroom. The teacher's desk is likewise provided with a liquid crystal display (an example of a monitor) 1a connected to the teacher computer 1, through which the teacher can check the images being shown on the liquid crystal displays 6a of the student computers 6 and, via the projector 7, on the large screen 7a.

  In this captured image processing system, when the indicator 4a of the insertion rod 4 held by the teacher is placed in the field of view of the camera 2, the area of the captured image displayed on the liquid crystal display 6a of the student computer 6 or on the large screen 7a varies with the three-dimensional position of the indicator 4a in the shooting space, where the subject 3 defines the X-Y plane and the camera optical axis defines the Z axis. That is, as described in detail below, the image processing apparatus 1 processes the captured image of the entire subject 3 obtained by the camera 2 based on the three-dimensional position of the indicator 4a, generates a display image showing either the entire subject or an enlarged portion of it, and displays that image on the liquid crystal displays 1a and 6a functioning as monitors and on the large screen 7a. The X-Y axis coordinates of the indicator 4a in the shooting space of the camera 2 serve as the base point from which the display image is extracted from the captured image, and the Z-axis coordinate of the indicator 4a determines the size of the area extracted from the captured image for the display image. Therefore, when the teacher moves the indicator 4a from right to left above the world map serving as the subject 3, the world map displayed on the liquid crystal display 6a and the large screen 7a also moves from right to left. When the teacher moves the indicator 4a closer to the world map, the displayed map is enlarged; when the indicator is moved away from the map, the displayed map is reduced. Of course, the relationship between the left-right movement of the indicator 4a over the subject 3 and the left-right movement of the display, and the relationship between the indicator's movement toward or away from the subject 3 and the enlargement or reduction of the display, can each be reversed.

  A first embodiment of the image processing apparatus 1 used in the above-described school education support system will be described with reference to the functional block diagram of FIG. 2. The image processing apparatus 1 in this embodiment is configured to obtain the three-dimensional position (X, Y, Z axis coordinates) of the indicator 4a by image processing of the captured image acquired by the camera 2.

  The image processing apparatus 1 is configured on a general-purpose computer, and its functions are realized by standard hardware, a standard installed OS, and a specially installed display image generation program. The essential function of the image processing apparatus 1 is to pre-process the captured image acquired by the camera 2 in the image input unit 10, develop it in the memory 11, apply the necessary image processing, and send the result as a display image from the display image transfer unit 50 to the student computers 6 and the projector 7 through the network 5. To this end, the following units are substantially constructed in the image processing apparatus 1 through execution of software: an indicator region detection unit 20 that detects the indicator region, which is the region of the indicator 4a in the captured image, from the captured image acquired by the camera 2; an indicator three-dimensional position calculation unit 30 that calculates the three-dimensional position of the indicator 4a using the detection information of the indicator region given by the indicator region detection unit 20; a specifying unit 40A that specifies a specific partial captured image within the captured image based on the three-dimensional position of the indicator 4a; and a display image generation unit 40B that generates the specified partial captured image as the display image.

  When the indicator 4a is placed in the shooting space as schematically illustrated in FIG. 3, the indicator region detection unit 20 detects the indicator region, which is the image region of the indicator 4a, through an object detection algorithm from captured images such as the one shown in FIG. 4, acquired by the camera 2 and recorded in the memory 11. To make the indicator region easy to detect from luminance information alone, the indicator 4a may be made of a retroreflective material. The indicator 4a may also be manufactured as a self-luminous body such as an LED; in the present invention, any object photographed by the camera in such a distinguishable display mode, including such self-luminous bodies, is referred to as the indicator 4a.
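For a retroreflective or self-luminous indicator, detection from luminance information alone can be as simple as thresholding and taking the bright blob's centroid and area. The sketch below is one minimal reading of this step, assuming a single bright indicator; a production system would more likely use a connected-component labeller to separate multiple blobs.

```python
# Minimal sketch of the indicator-region detection step, assuming the
# indicator appears as the brightest region in a grayscale capture.
# The threshold value and the dict layout are illustrative assumptions.

import numpy as np

def detect_indicator_region(gray, threshold=200):
    """Return centroid (x0, y0) and pixel area S of the bright region,
    or None when no indicator region is detected."""
    ys, xs = np.nonzero(gray >= threshold)
    if xs.size == 0:
        return None                      # no indicator in the field of view
    return {"x0": float(xs.mean()),      # centroid column (X in the image)
            "y0": float(ys.mean()),      # centroid row (Y in the image)
            "S": int(xs.size)}           # area of the indicator region
```

Returning None when nothing exceeds the threshold matches the "#08 No branch" of the routine described later, where the previous cut-out information is reused.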

  In order to calculate the three-dimensional position of the indicator 4a using the detection information output from the indicator region detection unit 20, the indicator three-dimensional position calculation unit 30 includes a Z coordinate calculation unit 32, which calculates the size (area) of the indicator region and derives from it the Z-axis coordinate along the camera optical axis, and an X-Y coordinate calculation unit 31, which calculates a predetermined position such as the center of gravity of the indicator region and, from this predetermined position (x0, y0) and the Z-axis coordinate, calculates the X-Y axis coordinates of the indicator 4a in the cross-section of the camera optical axis at that Z-axis coordinate. If the optical characteristics of the camera 2 and the actual size of the indicator 4a are known, the Z coordinate calculation unit 32 can exploit the fact that the distance from the camera 2 determines the size of the indicator in the captured image; it is convenient to measure the Z-axis position and the corresponding size in the captured image in advance and store the relation as a table or a function. For example, when expressed as a function, if the size of the indicator region is S, the Z-axis coordinate z is easily obtained as z = f(S). Likewise, when the optical characteristics of the camera 2 are known, the X-Y coordinate calculation unit 31 can obtain, from the Z-axis coordinate z and the predetermined position (x0, y0) of the indicator region in the captured image, the X-Y axis coordinates in the cross-section of the camera optical axis at that Z-axis coordinate (x = g(x0, y0, z), y = h(x0, y0, z)). The indicator three-dimensional position calculation unit 30 thus obtains the three-dimensional position (x, y, z) of the indicator 4a from the detection information and gives it to the specifying unit 40A.
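One concrete instantiation of z = f(S), x = g(x0, y0, z), y = h(x0, y0, z) is a pinhole-camera model, sketched below. The calibration constants (K_Z, FOCAL, image centre) are hypothetical stand-ins for the values the text says would be measured in advance and stored as a table or function.

```python
# Sketch of the indicator three-dimensional position calculation.
# All constants are assumed calibration values, not from the patent:
K_Z = 9000.0            # maps sqrt(region area) to distance along the optical axis
FOCAL = 500.0           # assumed focal length in pixels (pinhole model)
CX, CY = 320.0, 240.0   # assumed image centre of a 640x480 capture

def z_from_size(S):
    """z = f(S): the indicator looks smaller the further it is from the camera,
    so distance grows as the region area S shrinks."""
    return K_Z / (S ** 0.5)

def xy_from_image(x0, y0, z):
    """(x, y) = (g(x0, y0, z), h(x0, y0, z)): back-project the region centroid
    into the optical-axis cross-section at depth z."""
    x = (x0 - CX) * z / FOCAL
    y = (y0 - CY) * z / FOCAL
    return x, y

def indicator_3d_position(region):
    """region: dict with centroid x0, y0 and pixel area S."""
    z = z_from_size(region["S"])
    x, y = xy_from_image(region["x0"], region["y0"], z)
    return x, y, z
```

In practice f would be fitted from the pre-measured table the text describes; the inverse-square-root form here is just the behaviour a pinhole model predicts for a sphere's projected area.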

  The specifying unit 40A specifies, based on the three-dimensional position of the indicator 4a given by the indicator three-dimensional position calculation unit 30, the specific partial captured image to be displayed on monitors such as the liquid crystal display 6a of the student computer 6 and the large screen 7a, using the captured image developed in the memory 11. For this purpose, the specifying unit 40A includes a cut-out information generation unit 41 that generates cut-out information including the position and size of the specific partial captured image, and a cut-out information storage unit 42 that temporarily stores the cut-out information. The display image generation unit 40B generates the display image based on the cut-out information received from the specifying unit 40A. It therefore includes a specific partial captured image cut-out unit 43, which extracts the specific partial captured image from the captured image in the memory 11 based on cut-out information received directly from the cut-out information generation unit 41 or read from the cut-out information storage unit 42, and an image conversion unit 44, which converts the cut-out specific partial captured image so as to be compatible with display on monitors such as the liquid crystal display 6a of the student computer 6 or the large screen 7a.

  As schematically shown in FIG. 5, where the subject 3 is a map of Japan, the cut-out information generation unit 41 takes as the reference point P the position (x + Δx, y + Δy), offset by predetermined amounts Δx and Δy from the point determined by the x and y values (in dots) of the three-dimensional position (x, y, z), and generates cut-out information whose cut-out size m × n (in dots) is derived from the z value (in cm) of the three-dimensional position; m and n give the size of the specific partial captured image cut out from the captured image. The specific partial captured image is a rectangle of m (horizontal) × n (vertical) with the reference point P as its base point. The offset amounts can be set arbitrarily; when the offset is zero, the predetermined position of the indicator region itself becomes the cut-out reference point. In this embodiment the relationship between the value z and the values m and n is inversely proportional: FIG. 5(a) shows z = 20 cm with a size of m = 450 and n = 300, FIG. 5(b) shows z = 10 cm with m = 900 and n = 600, and FIG. 5(c) shows z = 5 cm with m = 1800 and n = 1200. Therefore, the further the indicator 4a moves from the camera 2, and hence the closer it comes to the subject 3, the larger the captured image of the subject 3 is displayed. The specific partial captured image cut out from the captured image in the memory 11 by the specific partial captured image cut-out unit 43 based on the cut-out information is subjected in the image conversion unit 44 to thinning, interpolation, and similar processing in accordance with the display screen characteristics of the monitor, such as the liquid crystal display 6a of the student computer 6 or the large screen 7a, and is then output as the display image.
That is, since only the specific partial captured image cut out from the captured image is displayed on the monitor screen, as can be understood from FIG. 5, the smaller the specific partial captured image, the greater the magnification at which the subject 3 is displayed. Since this specific partial captured image is also displayed on the liquid crystal display 1a, the monitor on the teacher's side, the teacher can move the indicator 4a while checking this monitor screen and thereby display only the desired region of the subject 3, enlarged, on the liquid crystal display 6a of the student computer 6 or on the large screen 7a.
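The inverse-proportional mapping can be made concrete from the three figures given in the text (z = 20 cm → 450 × 300, z = 10 cm → 900 × 600, z = 5 cm → 1800 × 1200), which imply m = 9000/z and n = 6000/z. The sketch below reproduces those numbers; the function name and the constants 9000 and 6000 are inferred from the examples, not stated in the patent.

```python
# Sketch of the cut-out information generation: reference point P is the
# indicator's X-Y position plus an arbitrary offset, and the cut-out size
# m x n is inversely proportional to z, matching the sizes in FIG. 5.

def cutout_info(x, y, z, dx=0, dy=0):
    """x, y in dots; z in cm; dx, dy are the arbitrary offsets Δx, Δy."""
    m = int(9000 / z)   # width  ~ 1/z: indicator nearer the subject -> bigger view
    n = int(6000 / z)   # height ~ 1/z, keeping the 3:2 aspect ratio of FIG. 5
    return {"P": (x + dx, y + dy), "m": m, "n": n}
```

With zero offset the indicator's own position is the cut-out base point, as the text notes.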

  Since the latest cut-out information generated by the cut-out information generation unit 41 is stored in the cut-out information storage unit 42, if the indicator region detection unit 20 cannot detect the indicator region in the captured image, the specific partial captured image cut-out unit 43 switches to cutting out the specific partial captured image based on the cut-out information read from the cut-out information storage unit 42. When the indicator region is detected again, a release signal is generated and the specific partial captured image cut-out unit 43 switches back to cutting out the specific partial captured image based on the cut-out information received directly from the cut-out information generation unit 41. With this configuration, when the teacher wants to keep the currently displayed image as it is, it suffices to quickly move the indicator 4a out of the field of view of the camera 2. When the current display image should be kept regardless of whether the indicator region can be detected, a configuration can also be employed in which turning ON the display holding switch 8 temporarily fixes the cut-out information stored in the cut-out information storage unit 42. In that case, to return to the normal state, the display holding switch 8 is turned OFF, forcibly generating a release signal that cancels the temporary holding of the cut-out information in the cut-out information storage unit 42. A specific key on the keyboard may be assigned as the display holding switch 8. If the indicator 4a is not detected after the cut-out information has been deleted by the release signal, the captured image is displayed as it is; if the indicator 4a is detected, the cut-out information is reset based on the indicator 4a, and the specific partial captured image cut out from the captured image according to this cut-out information is displayed.
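The hold-and-release behaviour of the cut-out information storage unit 42 amounts to a small piece of state logic, sketched below. The class and method names are illustrative; the patent describes the behaviour, not this interface.

```python
# Sketch of the cut-out-information switching: when the indicator region
# disappears, the last stored cut-out information keeps the current view;
# a release signal clears the stored information so that, with no indicator,
# the full captured image is shown again.

class CutoutStore:
    def __init__(self):
        self.saved = None                  # stands in for storage unit 42

    def select(self, fresh, released=False):
        """fresh: cut-out info from the generator, or None when the indicator
        region was not detected. Returns the info to use; None means 'show
        the whole captured image'."""
        if released:
            self.saved = None              # release signal deletes stored info
        if fresh is not None:
            self.saved = fresh             # indicator visible: use and remember
            return fresh
        return self.saved                  # indicator lost: hold previous view
```

The display holding switch 8 described in the text would map onto forcing `fresh` to be ignored while the switch is ON; that variant is omitted here for brevity.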

The flow of shooting display control in the captured image processing system configured as described above will be described with reference to the shooting display routine shown in FIG. 6.
First, the image input unit 10 captures the megapixel-class captured image acquired by the camera 2 and develops it in the memory 11 (#02). From the captured image developed in the memory 11, an indicator detection processing image is generated by applying pre-processing such as luminance conversion, tone conversion, and resolution conversion as necessary, so that the indicator region, which is the image of the indicator 4a, can be detected easily (#04). Indicator region detection processing is then performed on the generated indicator detection processing image (#06). When the indicator region is detected (#08 Yes branch), the indicator three-dimensional position calculation unit 30 obtains a predetermined position (x, y) of the indicator region and its size S (#10), calculates the Z-axis coordinate z of the indicator 4a from the obtained size S, and determines the three-dimensional position coordinates (x, y, z) of the indicator 4a (#12).

  The cut-out information generation unit 41 obtains the cut-out reference point P of the specific partial captured image within the captured image from the x and y values of the determined three-dimensional position coordinates (x, y, z) (#14), and obtains the cut-out size m × n from the z value (#16). Cut-out information including the cut-out reference point and the cut-out size is generated, given to the specific partial captured image cut-out unit 43, and stored in the cut-out information storage unit 42 (#18). The specific partial captured image cut-out unit 43 reads the cut-out information (#19) and cuts out the specific partial captured image from the captured image recorded in the memory 11 (#20). The cut-out specific partial captured image is image-converted in accordance with the image gradation of the display monitor (#22), transferred as the display image to the liquid crystal display 1a on the teacher's side and to the liquid crystal displays 6a of the student computers 6 or the large screen 7a, and displayed there (#24).
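One pass of this routine (steps #02 to #28) can be condensed as follows. The processing stages are passed in as functions so the sketch stays self-contained; their names and the `store` dict are illustrative, not the patent's interfaces.

```python
# Condensed sketch of one pass of the shooting display routine of FIG. 6.
# detect, position3d, make_cutout, and cut stand for the stages named in the
# text (#06, #10-#12, #14-#18, #20); 'store' plays the role of the cut-out
# information storage unit 42 plus the release signal.

def shooting_display_pass(frame, store, detect, position3d, make_cutout, cut):
    region = detect(frame)                      # indicator region detection (#06)
    if region is not None:                      # #08 Yes branch
        info = make_cutout(*position3d(region)) # 3-D position -> cut-out info
        store["latest"] = info                  # keep for later passes (#18)
    elif store.pop("released", False):          # #26 Yes branch: hold cancelled
        store["latest"] = None
        info = None
    else:                                       # #26 No branch
        info = store.get("latest")              # reuse stored cut-out info (#28)
    return frame if info is None else cut(frame, info)
```

Returning the whole frame when no cut-out information exists corresponds to step #30, where the captured image is converted into the display image as it is.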

  When the indicator region is not detected by the indicator region detection unit 20 (#08 No branch), it is first checked whether the holding of the immediately preceding display has been forcibly cancelled by generation of a release signal. If it has not been cancelled (#26 No branch), the cut-out information is read from the cut-out information storage unit 42 (#28), the process proceeds to step #20, and the specific partial captured image is cut out. If the holding of the immediately preceding display has been cancelled (#26 Yes branch), the captured image developed in the memory 11 is converted into the display image as it is (#30), and the process proceeds to step #20.

  The processing of steps #02 to #30 described above continues until a display end command is input; the subject image of the region specified by the three-dimensional position of the indicator 4a on the insertion rod 4 that the teacher holds above the subject 3 is displayed on the liquid crystal displays 6a of the student computers 6 and on the large screen 7a, allowing the teaching materials to be presented to the students efficiently.

  To satisfy the desire to maintain the previous display image even when an indicator region is detected in step #08, step #09 is added as shown in the flowchart of FIG. 7. There it is further checked whether a display maintenance mode is set. If the display maintenance mode is set (#09 Yes branch), the previous cutout information read from the cutout information storage unit 42 is temporarily buffered; as long as the display maintenance mode remains set, the buffered cutout information is read from this buffer (#29), the process proceeds to step #20, and the specific partial captured image cutout unit 43 uses that cutout information. The display maintenance mode may be set and released with the display holding switch 8 or the like, or with a switch provided on the insertion rod 4.
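The branching of steps #08, #09, and #26 can be condensed into a small controller. This is a hypothetical sketch (class and attribute names are illustrative); it only models which cutout information drives the display, not the flowchart's exact step numbering.

```python
class DisplayController:
    """Hypothetical sketch of the #08/#09/#26 branching.

    - indicator detected, hold mode off -> new cutout used and stored
    - indicator detected, hold mode on  -> previously stored cutout reused
    - indicator lost, not released      -> stored cutout reused (display held)
    - indicator lost, released          -> None, i.e. show the full frame
    """
    def __init__(self):
        self.stored = None   # plays the role of cutout information storage unit 42
        self.hold = False    # display maintenance mode (e.g. display holding switch 8)

    def next_cutout(self, detected, new_cutout=None, released=False):
        if detected:
            if self.hold and self.stored is not None:
                return self.stored          # keep previous display despite detection
            self.stored = new_cutout        # normal path: adopt and store new cutout
            return new_cutout
        if released or self.stored is None:
            return None                     # use the captured image as it is
        return self.stored                  # maintain the immediately preceding image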

  FIG. 8 shows a functional block diagram of a second embodiment of the image processing apparatus 1 used in the above-described school education support system. In this embodiment, the X-Y axis coordinates of the three-dimensional position (X, Y, Z axis coordinates) of the indicator 4a are obtained by image processing of the captured image acquired by the camera 2, while the Z-axis coordinate is obtained by a distance meter 9 provided at the periphery of the camera 2. A non-contact rangefinder such as a laser rangefinder or an ultrasonic rangefinder is convenient as the rangefinder 9; however, since the indicator 4a moves within a predetermined range in the X-Y axis directions, it is preferable that the rangefinder have a wide ranging-beam width or scan its ranging beam. The distance measurement data of the indicator 4a obtained by the distance meter 9 is transferred to the distance measurement data input unit 12 of the image processing apparatus 1, converted there into a predetermined data format, and given to the indicator three-dimensional position calculation unit 30. Accordingly, the Z coordinate calculation unit 32 in this embodiment simply reads the z value from the distance measurement data received from the distance measurement data input unit 12. The X-Y coordinate calculation unit 31, as in the first embodiment, calculates a predetermined position of the indicator region from the detection information output by the indicator region detection unit 20 and takes this predetermined position (x, y) as the X-Y axis coordinates of the indicator 4a. The configurations of the display image generation unit 40B and the display image transfer unit 50 are the same as in the first embodiment.
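A minimal sketch of this second embodiment's position calculation: X-Y from the detected indicator region, Z read directly from the rangefinder. The patent leaves the "predetermined position" of the region unspecified, so the region centroid is used here as an assumption; all names are illustrative.

```python
def indicator_3d_position(region_pixels, range_mm):
    """Sketch of the second embodiment: X-Y coordinate calculation unit 31
    takes a predetermined position of the indicator region (assumed here
    to be its centroid), and Z coordinate calculation unit 32 simply reads
    the distance reported by the rangefinder (distance meter 9).

    region_pixels: iterable of (x, y) pixels belonging to the indicator region.
    range_mm: distance measurement received from the data input unit.
    """
    pts = list(region_pixels)
    x = sum(p[0] for p in pts) / len(pts)   # centroid x (assumption)
    y = sum(p[1] for p in pts) / len(pts)   # centroid y (assumption)
    z = float(range_mm)                     # z value read as-is from the rangefinder
    return (x, y, z)
```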

  In the above-described embodiment, when the indicator region is not detected, the display image is maintained using the cutout information stored in the cutout information storage unit 42. Instead, a configuration may be employed in which, when the indicator region is not detected, the captured image is used as it is without using the cutout information.

  In the above-described embodiment, the indicator 4a is captured in the same photographed image as the subject 3; therefore, depending on its position, the image of the indicator 4a may become an obstacle in the finally generated display image. To avoid this problem, the indicator 4a may be made of a material that transmits visible light and reflects non-visible light, the camera 2 may acquire both a visible-light image and a non-visible-light image, and the indicator region detection unit 20 may obtain the three-dimensional position of the indicator 4a using an indicator detection processing image generated from the non-visible-light image.
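Because the indicator reflects non-visible (e.g. infrared) light, it appears bright in the invisible-light image while staying transparent in the visible-light image shown to viewers, so a simple intensity threshold can isolate the indicator region. The following is a hedged sketch; the threshold value and function name are assumptions, and a real detector would likely add noise filtering and connected-component analysis.

```python
def detect_indicator_region(ir_image, threshold=200):
    """Sketch of indicator-region detection from the non-visible-light image.

    ir_image: 2-D list of 8-bit intensities (rows of ints 0-255).
    threshold: assumed brightness cut-off for the reflective indicator.
    Returns the set of (x, y) pixels forming the indicator region.
    """
    region = set()
    for y, row in enumerate(ir_image):
        for x, v in enumerate(row):
            if v >= threshold:          # bright in IR => indicator material
                region.add((x, y))
    return region
```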

  FIG. 9 is a functional block diagram of another embodiment of the captured image processing system according to the present invention. This embodiment differs from the previous ones in that a subject photographed image recorded in advance in an image recording unit 60 is enlarged or reduced based on the three-dimensional position of the indicator 4a. That is, the following processing is performed. When the indicator 4a is placed outside the shooting space and is therefore not detected by the indicator region detection unit 20, the image shot by the camera 2 is recorded in the image recording unit 60 as a subject photographed image (an image in which only the subject is photographed). On the other hand, when the indicator 4a is placed in the shooting space and is detected by the indicator region detection unit 20, the detection information of the indicator region is acquired and the following processing is performed. First, the photographed image from the camera 2 is transferred to the memory 11 through the image input unit 10 as an indicator photographed image (an image in which the indicator 4a is photographed, alone or together with the subject), and the subject photographed image of the subject 3 recorded in the image recording unit 60 is transferred from the image recording unit 60 to the memory 11 through the image input unit 10. The image recording unit 60 may be anything that records a subject photographed image; typical examples include a DVD, a CD-ROM, a hard disk, and a semiconductor memory. The subject photographed image recorded in the image recording unit 60 may be a still image or a moving image.
Next, the three-dimensional position of the indicator 4a is calculated by the indicator three-dimensional position calculation unit 30 using the detection information of the indicator region acquired by the indicator region detection unit 20 as described above. Then, based on the calculated three-dimensional position of the indicator 4a, the specific partial captured image is specified by the cutout information generated by the cutout information generation unit 41 of the specifying unit 40A. The specific partial captured image cutout unit 43 of the display image generation unit 40B extracts the specific partial captured image from the subject photographed image transferred to the memory 11 based on the cutout information, and the image conversion unit 44 generates the display image from it. The display image generated in this way is transferred to the monitor by the display image transfer unit 50. Thus, in this embodiment, when an image recorded in the image recording unit 60 is displayed on the liquid crystal display 6a or the large screen 7a, the display area within the image is determined according to the position of the indicator 4a held in front of the camera 2, and as a result the enlarged or reduced display state of the image is determined.
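The enlarged or reduced display state arises from scaling the specified cutout to the monitor resolution. A minimal sketch of that last step, using nearest-neighbour sampling as an assumption (the patent does not name an interpolation method); all identifiers are illustrative.

```python
def zoom_recorded_image(image, ref, size, out_w, out_h):
    """Sketch: scale the cutout specified by the indicator position up or
    down to the monitor resolution (nearest-neighbour, an assumption).

    image: 2-D list of pixel values (the recorded subject photographed image).
    ref:   (px, py) cutout reference point (top-left corner).
    size:  (m, n) cutout size.
    Returns an out_h x out_w 2-D list for the display image.
    """
    px, py = ref
    m, n = size
    out = []
    for j in range(out_h):
        src_y = py + j * n // out_h          # nearest source row
        row = [image[src_y][px + i * m // out_w] for i in range(out_w)]
        out.append(row)
    return out
```

A small cutout scaled to a large output enlarges the image; a cutout larger than the output reduces it.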

  As a method for obtaining the three-dimensional position of the indicator 4a, a three-dimensional position calculation method using stereo images, as used in the field of computer vision, may be employed in addition to the methods described above. That is, another camera is prepared in addition to the camera 2 aimed at the subject 3, and the three-dimensional position of the indicator 4a is calculated using the two captured images as a stereo pair. A configuration for obtaining the three-dimensional position of the indicator 4a in this way can be realized by incorporating it into the indicator three-dimensional position calculation unit 30.
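For a rectified stereo pair, the textbook depth relation is z = f · B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the horizontal disparity of the indicator between the images. This sketch shows only that core relation; parameter names and units are illustrative, and a real system would first rectify the images and match the indicator between them.

```python
def stereo_depth(u_left, u_right, focal_px, baseline_mm):
    """Depth of the indicator from a rectified stereo pair: z = f * B / d.

    u_left, u_right: horizontal pixel coordinate of the indicator in the
    left and right image; focal_px: focal length in pixels; baseline_mm:
    distance between the two camera centres.
    """
    disparity = u_left - u_right          # pixels; positive for finite depth
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_mm / disparity   # depth in mm
```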

  In the above-described embodiments, an example in which the captured image processing system according to the present invention is applied to a school education support system has been described. Of course, the captured image processing system is not limited to school education and can be applied to various fields in which images are displayed.

Schematic overview when the captured image processing system according to the present invention is applied to a school education support system
Functional block diagram of the image processing apparatus 1 used in the school education support system
Schematic diagram of the shooting space
Explanatory drawing showing an example of the captured image developed in the memory
Explanatory drawing showing the relationship between the position of the indicator and the specific partial captured image cut out from the captured image
Flowchart showing an example of the flow of shooting display control
Flowchart showing a variation of the flow of shooting display control
Functional block diagram of the second embodiment of the image processing apparatus 1 used in the school education support system
Functional block diagram of the image processing apparatus 1 in another embodiment of the captured image processing system according to the present invention

Explanation of symbols

1: Image processing device (teacher computer)
2: Camera
3: Subject
4: Insertion rod
4a: Indicator
5: Network
6a: Liquid crystal display of student computer (monitor)
7: Projector (monitor)
8: Display holding switch
9: Distance meter
20: Indicator region detection unit
30: Indicator three-dimensional position calculation unit
31: X-Y coordinate calculation unit
32: Z coordinate calculation unit
40A: Specifying unit
41: Cutout information generation unit
42: Cutout information storage unit
40B: Display image generation unit
43: Specific partial captured image cutout unit
44: Image conversion unit
50: Display image transfer unit
60: Image recording unit

Claims (5)

  1. In a captured image processing system comprising a camera that captures a subject and an image processing device that processes a captured image acquired by the camera and transfers the generated display image to a monitor,
    Indicator,
    A pointer region detection unit that detects a pointer region that is a region of the pointer from a captured image acquired by the camera;
    A pointer three-dimensional position calculator that calculates a three-dimensional position of the pointer using at least detection information of the pointer region;
    A specifying unit that specifies a specific partial captured image in the captured image based on a three-dimensional position of the indicator;
    A captured image processing system comprising: a display image generation unit that generates the specific partial captured image as the display image; and a display image transfer unit that transfers the display image to the monitor.
  2. The captured image processing system according to claim 1, wherein the indicator three-dimensional position calculation unit calculates a Z-axis coordinate of the indicator along the camera optical axis from the size of the indicator region in the captured image, and calculates X-Y axis coordinates in the camera-optical-axis cross section at that Z-axis coordinate from the position of the indicator region in the captured image.
  3. The captured image processing system according to claim 1, further comprising an indicator distance acquisition unit that measures a distance between the camera and the indicator with a rangefinder and acquires an indicator distance, wherein the indicator three-dimensional position calculation unit calculates the Z-axis coordinate of the indicator along the camera optical axis from the indicator distance, and calculates the X-Y axis coordinates in the camera-optical-axis cross section at that Z-axis coordinate from the position of the indicator region in the captured image.
  4. In a display image generation program for a captured image processing system that processes a captured image obtained by capturing the subject and an indicator with the camera, with the indicator disposed between the subject and the camera, and transfers the generated display image to the monitor,
    A function of recording the captured image sent from the camera in a memory;
    A function of detecting a pointer region that is a region of the pointer from a captured image acquired by the camera;
    A function of calculating a three-dimensional position of the indicator using at least detection information of the indicator region;
    A function of cutting out a specific partial photographed image identified based on the three-dimensional position of the indicator from the photographed image developed in the memory;
    A function of generating the specific partial captured image as the display image;
    a display image generation program for causing a computer to realize the above functions.
  5. In a captured image processing system comprising a camera and an image processing apparatus that processes a captured subject image obtained by capturing a subject and transfers a display image generated to the monitor,
    Indicator,
    A pointer region detection unit that detects a pointer region that is a region of the pointer from a captured image acquired by the camera;
    An image recording unit that records the subject captured image when the indicator region is not detected by the indicator region detection unit;
    A pointer three-dimensional position calculation unit that calculates a three-dimensional position of the pointer using at least detection information of the pointer region when the pointer region is detected;
    A specifying unit for specifying a specific partial captured image in the subject captured image read from the image recording unit based on the three-dimensional position of the indicator;
    A captured image processing system comprising: a display image generation unit that generates the specific partial captured image as the display image; and a display image transfer unit that transfers the display image to the monitor.
JP2007254665A 2007-09-28 2007-09-28 Captured image processing system and display image generation program Active JP4854033B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007254665A JP4854033B2 (en) 2007-09-28 2007-09-28 Captured image processing system and display image generation program

Publications (2)

Publication Number Publication Date
JP2009088868A true JP2009088868A (en) 2009-04-23
JP4854033B2 JP4854033B2 (en) 2012-01-11

Family

ID=40661721

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007254665A Active JP4854033B2 (en) 2007-09-28 2007-09-28 Captured image processing system and display image generation program

Country Status (1)

Country Link
JP (1) JP4854033B2 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08125921A (en) * 1994-10-26 1996-05-17 Canon Inc Writing-and-drawing camera apparatus
JP2003259183A (en) * 2002-03-04 2003-09-12 Hitachi Ltd Presentation system
JP2005123707A (en) * 2003-10-14 2005-05-12 Casio Comput Co Ltd Image projection apparatus and image projection system, and display image generating apparatus and display image generating method

Also Published As

Publication number Publication date
JP4854033B2 (en) 2012-01-11

Similar Documents

Publication Publication Date Title
US9734609B2 (en) Transprojection of geometry data
EP2287708B1 (en) Image recognizing apparatus, operation determination method, and program
JP5538667B2 (en) Position / orientation measuring apparatus and control method thereof
US4829373A (en) Stereo mensuration apparatus
JP4958497B2 (en) Position / orientation measuring apparatus, position / orientation measuring method, mixed reality presentation system, computer program, and storage medium
JP3885458B2 (en) Projected image calibration method and apparatus, and machine-readable medium
CN101730876B (en) Pointing device using camera and outputting mark
DE60205662T2 (en) Apparatus and method for calculating a position of a display
JP5762892B2 (en) Information display system, information display method, and information display program
US10217288B2 (en) Method for representing points of interest in a view of a real environment on a mobile device and mobile device therefor
US7176904B2 (en) Information input/output apparatus, information input/output control method, and computer product
US7176881B2 (en) Presentation system, material presenting device, and photographing device for presentation
GB2564794A (en) Image-stitching for dimensioning
JP4689380B2 (en) Information processing method and apparatus
KR100835759B1 (en) Image projector, inclination angle detection method, and projection image correction method
DE60127644T2 (en) Teaching device for a robot
JP4701424B2 (en) Image recognition apparatus, operation determination method, and program
JP3509652B2 (en) Projector device
US8350896B2 (en) Terminal apparatus, display control method, and display control program
CN101533312B (en) Auto-aligning touch system and method
JP4627781B2 (en) Coordinate input / detection device and electronic blackboard system
JP2012212345A (en) Terminal device, object control method and program
KR100869447B1 (en) Apparatus and method for indicating a target by image processing without three-dimensional modeling
JP3830956B1 (en) Information output device
US7027041B2 (en) Presentation system

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100127

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20111007

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20111020

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20111021

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20141104

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

Ref document number: 4854033

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
