KR101663240B1 - Lighting board system - Google Patents

Lighting board system

Info

Publication number
KR101663240B1
KR101663240B1 KR1020150083671A KR20150083671A
Authority
KR
South Korea
Prior art keywords
unit
image
data
user
image data
Prior art date
Application number
KR1020150083671A
Other languages
Korean (ko)
Inventor
조인제
조훈제
Original Assignee
주식회사 테라클
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 테라클 filed Critical 주식회사 테라클
Priority to KR1020150083671A priority Critical patent/KR101663240B1/en
Application granted granted Critical
Publication of KR101663240B1 publication Critical patent/KR101663240B1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 - Detection arrangements using opto-electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention relates to a lighting board system comprising: a board unit having a first surface enabling input by a user's input tool, a second surface facing the first surface in a first direction, and a third surface extending to connect the edges of the first surface and the second surface, the board unit transmitting light in the first direction; a first lighting unit facing at least a portion of the third surface and emitting light in a second direction; a camera unit facing the second surface and having an imaging direction formed in the second direction perpendicular to the first direction; a mirror unit positioned in front of the imaging direction of the camera unit and inclined with respect to the first and second directions; and a first image generation unit electrically connected to the camera unit and generating a first image layer from first image data generated by the camera unit. The lighting board system of the present invention can provide a high-quality lecture video without a separate editing process.

Description

Lighting board system

The present invention relates to a lighting board system.

As Internet-based communication has spread, education has extended beyond lectures at schools and academies to lectures delivered as online video. In most such video lectures, the lecturer delivers the lecture directly to the camera. Because the writing board is located behind the lecturer, the lecturer must turn toward the board in order to write on it; as a result, it is difficult for the viewer to sufficiently follow the written contents, and the quality of the video is degraded.

In addition, when a user employs separate educational materials such as a moving picture or a graphic in addition to the writing board, a separate editing process is required after the lecture. In this case, the screen is cut away from the user, which interrupts the flow of the lecture.

SUMMARY OF THE INVENTION The present invention has been made to address the above problems and/or limitations, and it is an object of the present invention to provide a lighting board system capable of providing a high-quality lecture video without any editing process.

In order to solve the above problems and/or limitations, an aspect of the present invention provides a lighting board system including: a board unit including a first surface on which input by a user's input tool can be made, a second surface facing the first surface in a first direction, and a third surface extending to connect the edges of the first surface and the second surface, the board unit being provided to transmit light along the first direction; a first lighting unit positioned to face at least a portion of the third surface and irradiating light in a second direction; a camera unit positioned to face the second surface, with an imaging direction formed in the second direction perpendicular to the first direction; a mirror unit located in front of the imaging direction of the camera unit and arranged to be inclined with respect to the first direction and the second direction; and a first image generation unit electrically connected to the camera unit and configured to generate a first image layer from first image data generated by the camera unit.

The system may further include an image reproduction unit for reproducing second image data different from the first image data, a second image generation unit configured to generate a second image layer from the second image data transmitted from the image reproduction unit, and a third image generation unit configured to generate a third image layer by combining the first image layer and the second image layer so as to overlap each other.

The system may further include a second lighting unit positioned to face the first surface across a user and to illuminate the first portion of the user.

The system may further include a third lighting unit positioned to face the second lighting unit across a user and configured to irradiate light toward the second portion of the user other than the first portion.

The system may further include a fourth lighting unit located on both sides of the board unit in the second direction and configured to irradiate light toward the third portion of the user other than the first and second portions .

The system may further include an absorber unit positioned to face the first surface with the second lighting unit interposed therebetween and configured to absorb light.

The system may further comprise a voice analysis unit for analyzing the voice of the user to adjust the second image data.

The voice analysis unit may include: a voice data generation unit that generates first voice data from the user's voice; a voice data selection unit that selects second voice data from the first voice data; a voice comparison unit that compares the second voice data with reference voice data; and a first adjustment signal generation unit that is electrically connected to the image reproduction unit and generates a first adjustment signal for adjusting the second image data when the second voice data matches the reference voice data.

The system may further include an image analysis unit for analyzing the image for the user and adjusting the second image data.

The image analysis unit may include: an image data selection unit for selecting third image data from an image of the user; an image comparison unit for comparing the third image data with reference image data; and a second adjustment signal generation unit for generating a second adjustment signal for adjusting the second image data when the third image data matches the reference image data.

According to the embodiments, the user can write on the board directly in front of himself or herself without having to turn around, and the written contents are clearly captured on the video screen, so that the viewer can obtain a clear, high-quality lecture video.

In addition, educational materials such as video and graphics can be provided stereoscopically during lectures.

The lecture effect can be further enhanced since the operation of the educational material can be controlled only by the user's specific voice or motion.

FIG. 1 is a configuration diagram schematically showing the configuration of a lighting board system according to an embodiment.
FIG. 2 is a perspective view showing the board unit of the embodiment of FIG. 1.
FIG. 3 is a partial cross-sectional view showing the board unit and the first lighting unit of the embodiment of FIG. 1.
FIG. 4 is a configuration diagram showing another embodiment of the camera unit.
FIG. 5 is a configuration diagram showing an embodiment of the control unit.
FIG. 6 is a diagram showing a state in which the first image layer and the second image layer are overlapped.
FIG. 7 is a configuration diagram schematically showing the configuration of a lighting board system according to another embodiment.
FIG. 8 is a cross-sectional view taken along line VIII-VIII of FIG. 7.
FIG. 9 is a block diagram showing an embodiment of the voice analysis unit.
FIG. 10 is a block diagram illustrating a voice analysis process according to an embodiment.
FIG. 11 is a configuration diagram showing an embodiment of the image analysis unit.
FIG. 12 is a block diagram illustrating an image analysis process according to an embodiment.
FIG. 13 is a configuration diagram showing an embodiment of the data analysis unit.
FIG. 14 is a block diagram illustrating a data analysis process according to an embodiment.

Embodiments are capable of various transformations, and specific embodiments are illustrated in the drawings and described in detail in the detailed description. The effects and features of the embodiments, and how to accomplish them, will be apparent with reference to the following detailed description together with the drawings. However, the embodiments are not limited to the embodiments described below, but may be implemented in various forms.

Hereinafter, exemplary embodiments will be described in detail with reference to the accompanying drawings, wherein like reference numerals refer to like or corresponding elements throughout the drawings, and a duplicate description thereof will be omitted.

In the following embodiments, the terms first, second, and the like are used for the purpose of distinguishing one element from another element, not the limitative meaning.

In the following examples, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise.

In the following embodiments, terms such as "include" or "have" mean that the features or elements described in the specification are present, and do not preclude the possibility that one or more other features or elements may be added.

In the drawings, components may be exaggerated or reduced in size for convenience of explanation. For example, the sizes and thicknesses of the respective components shown in the drawings are arbitrarily shown for convenience of explanation, and the following embodiments are not necessarily drawn to scale.

FIG. 1 is a configuration diagram schematically showing a configuration of a lighting board system according to an embodiment.

Referring to FIG. 1, a lighting board system according to an embodiment includes a board unit 1, a first lighting unit 21, a camera unit 3, a mirror unit 4, and a control unit 5.

The board unit 1 has a first surface 11 and a second surface 12 opposed to each other, and a third surface 13 extending to connect the edges of the first surface 11 and the second surface 12; as shown in FIG. 2, it may be formed as a flat plate. The first surface 11 is the surface facing the user 7, and the second surface 12 is the surface facing the first surface 11 in the first direction D1. The third surface 13 may be an edge surface formed at the edges of the first surface 11 and the second surface 12.

The board unit 1 is preferably provided to transmit light along at least the first direction D1, so that the appearance of the user 7 facing the first surface 11 can be observed from the direction opposite to the second surface 12. To this end, the board unit 1 may be formed of a transparent glass material or a transparent plastic material.

The board unit 1 is provided so that the user 7 can make an input on the first surface 11 with an input tool. The input tool may be a pen containing a fluorescent pigment.

The first surface 11 may be rougher than the second surface 12 so that marking by the input tool can be performed more smoothly. In this case, however, it is preferable that the light transmittance of the board unit 1 is not excessively decreased.

The first lighting unit 21 is positioned to face at least a part of the third surface 13 and is provided to irradiate light in the second direction D2. It suffices that the first lighting unit 21 irradiates light onto an area corresponding to the input area 14, the portion of the first surface 11 on which the user mainly writes.

Referring to FIG. 3, the first lighting unit 21 may include a first light source 211 positioned to face the third surface 13, and a first light guide 212 positioned between the third surface 13 and the first light source 211. The first light guide 212 is coupled to the edge of the third surface 13 and guides the light of the first light source 211 toward the third surface 13 without leaking it outward. The first light guide 212 may be coupled along the third surface 13, as shown in FIG. 2, and may be formed to cover the entire third surface 13 according to an embodiment.

When the first lighting unit 21 irradiates light into the board unit 1 through the third surface 13 and the user 7 writes on the first surface 11 with the input tool, the contents input on the first surface 11 glow with a fluorescent color and can therefore be seen more clearly by the viewer. In addition, since the contents input by the user 7 can be confirmed through the second surface 12, they always remain in front of the user 7, who faces the first surface 11, so that the input contents are not obscured by the user's body.

The camera unit 3 is located in front of the second surface 12 in the first direction D1 and is positioned so as to be spaced apart from the second surface 12. It is preferable that the camera unit 3 is arranged such that its imaging direction 31 is directed in the second direction D2. Thus, the camera unit 3 can photograph the mirror unit 4 positioned in front of the camera unit 3.

It is preferable that the mirror unit 4 is located in front of the imaging direction 31 of the camera unit 3 and that at least its reflection surface 41 is inclined with respect to the first direction D1 and the second direction D2. As shown in FIG. 1, the reflection surface 41 of the mirror unit 4 may be at 45 degrees with respect to each of the first direction D1 and the second direction D2. As shown in FIG. 2, the mirror unit 4 may be formed to extend in a third direction D3 perpendicular to the first direction D1 and the second direction D2, and its length may correspond to the length of the board unit 1 in the third direction D3. Accordingly, when the camera unit 3 photographs the inverted image through the mirror unit 4, it is possible to prevent part of the scene from being left out of the image. However, the present invention is not necessarily limited to this, and the mirror unit 4 may further include a convex lens and/or a concave lens so that the camera unit 3 can capture the image transmitted through the board unit 1 at a reduced size.
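For reference only, the 45 degree inclination can be read off the law of reflection: a mirror whose surface makes an angle $\alpha$ with the line of sight deviates that line by $2\alpha$, so

$$\delta = 2\alpha = 2 \times 45^\circ = 90^\circ,$$

which is exactly the angle between the imaging direction D2 and the first direction D1; a camera aimed along D2 therefore views the board unit along D1 through the mirror unit 4.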

The mirror unit 4 may be coupled to the camera unit 3 using a separate jig 42, as shown in FIG. 4. The jig 42 may be coupled to the mirror unit 4 in such a way that the angle of the mirror unit 4 can be changed.

Referring to FIG. 1, the user 7 can lecture toward the camera unit 3 while writing specific contents on the first surface 11 located directly in front of him or her, without turning away. At this time, the lecture scene is inverted by the mirror unit 4, and the inverted image is captured by the camera unit 3. Therefore, when the user 7 writes contents including characters on the first surface 11, the viewer watches the video that has been reversed by the mirror unit 4, so that the characters appear in their original orientation.

The first image data, which is this inverted image, is transferred to the first image generation unit of the control unit 5 electrically connected to the camera unit 3, and the first image generation unit generates a first image layer from the first image data and provides it to the viewer. Therefore, without any separate editing process, the viewer can watch the lecture video as it is being shot in real time.
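For intuition only, the single reflection at the mirror unit 4 does optically what a horizontal flip does in software: writing seen through the second surface 12 is left-right reversed, and one flip restores the original orientation of the characters. Below is a minimal software sketch of that equivalent effect, assuming OpenCV-style frames from a hypothetical camera facing the second surface directly; in the patented arrangement the mirror performs this correction, so no software flip is needed.

```python
import cv2
import numpy as np

def correct_board_view(frame: np.ndarray) -> np.ndarray:
    """Undo the left-right reversal of writing seen through the back of the board.

    In the lighting board system this reversal is undone optically by the
    mirror unit 4, so the first image data needs no software correction;
    this function only illustrates the equivalent effect.
    """
    return cv2.flip(frame, 1)  # flipCode=1 flips about the vertical axis
```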

Meanwhile, the control unit 5 may be electrically connected to an image reproduction unit 6 for reproducing second image data different from the first image data. The image reproduction unit 6 can reproduce second image data that the user 7 uses as lecture material, rather than the image taken by the camera unit 3. The second image data may be various video images, Internet images, and/or presentation images. The image reproduction unit 6 may be provided in an apparatus separate from the control unit 5, but the present invention is not limited thereto, and it may be provided in the same apparatus as the control unit 5. The image reproduction unit 6 may be a memory loaded with a video reproduction program, or a part of such a memory.

As shown in FIG. 5, the control unit 5 may include a first image generation unit 51, a second image generation unit 52, and a third image generation unit 53.

As described above, the first image generation unit 51 generates a first image layer 511, as shown in FIG. 6, from the first image data generated by the camera unit 3, and the second image generation unit 52 receives the second image data from the image reproduction unit 6 and generates a second image layer 521. The third image generation unit 53 generates a third image layer by superimposing the first image layer 511 and the second image layer 521 on each other. The third image layer thus generated is stored in a separate storage unit (not shown) and provided to the viewer. Accordingly, the third image layer, which is the lecture video seen by the viewer, is an image in which the first image layer containing the lecture scene overlaps the second image layer containing supplementary lecture material, and can therefore be a more stereoscopic image that conveys various lecture effects. For example, when the user 7, who is a lecturer, gives a chemistry lecture, the user 7 can write a chemical formula on the first surface 11 of the board unit 1 while a stereoscopic image of the corresponding chemical structure is generated as the second image layer, and a third image layer in which the two image layers are superimposed can be generated as the final image. The first image generation unit 51, the second image generation unit 52, and the third image generation unit 53 need not necessarily be included in a single module, and at least one of them may be provided in a separate device. Each of them may also be a memory loaded with the corresponding program, or a part of such a memory.
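As an illustration only, the superposition performed by the third image generation unit 53 could be sketched as below. This assumes OpenCV-style BGR frames; the function name, the resize step, and the fixed blending weight are assumptions made for the sketch, since the embodiment only specifies that the first and second image layers are overlapped.

```python
import cv2
import numpy as np

def compose_third_layer(first_layer: np.ndarray,
                        second_layer: np.ndarray,
                        alpha: float = 0.35) -> np.ndarray:
    """Overlap the supplementary layer (second) onto the lecture layer (first)."""
    h, w = first_layer.shape[:2]
    # Bring the supplementary frame to the lecture frame size before blending.
    second_resized = cv2.resize(second_layer, (w, h))
    # Weighted superposition; the patent does not prescribe the blend rule.
    return cv2.addWeighted(first_layer, 1.0, second_resized, alpha, 0.0)
```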

Meanwhile, as shown in FIGS. 7 and 8, according to another embodiment of the lighting board system of the present invention, a second lighting unit 22 may be further provided.

The second lighting unit 22 is positioned to face the first surface 11 with the user 7 therebetween and is provided to irradiate light toward the first portion 71 of the user 7.

The second lighting unit 22 may include a second light source 220, a 2-1 light guide 221 extending in the third direction D3, and a 2-2 light guide 222 inclined with respect to the third direction D3 and extending toward the head portion of the user 7. The 2-1 light guide 221 and the 2-2 light guide 222 are positioned on opposite sides of the second light source 220. It is preferable that the 2-1 light guide 221 and the 2-2 light guide 222 extend in the second direction D2 when viewed in the drawing. Light emitted from the second light source 220 through the 2-1 light guide 221 and the 2-2 light guide 222 illuminates the back of the shoulders of the user 7, so that the image of the user 7 formed on the first image layer can have a more stereoscopic appearance.

Meanwhile, in the above embodiment, the third lighting unit 23 positioned to face the second lighting unit 22 with the user 7 therebetween may be further included.

The third lighting unit 23 is provided to irradiate light toward the second portion 72 of the user 7. The second portion 72 may be a portion different from the first portion 71, may be a front portion including the face of the user 7, and may at least partially overlap the first portion 71.

The third lighting unit 23 may include a third light source 230, a 3-1 light guide 231 extending in the third direction D3, and a 3-2 light guide 232 inclined with respect to the third direction D3 and extending toward the face portion of the user 7. The 3-1 light guide 231 and the 3-2 light guide 232 are positioned on opposite sides of the third light source 230. It is preferable that the 3-1 light guide 231 and the 3-2 light guide 232 extend in the second direction D2 when viewed in the drawing. Light emitted from the third light source 230 through the 3-1 light guide 231 and the 3-2 light guide 232 illuminates the front of the user 7 including the face, so that the image of the user 7 formed on the first image layer can have a clearer appearance.

In the above embodiment, a pair of fourth lighting units 24 positioned on both sides of the board unit 1 in the second direction D2 may be further included.

The fourth lighting unit 24 is provided to irradiate light toward the third portion 73 of the user 7. The third portion 73 may be a portion different from the first portion 71 and the second portion 72, may be the shoulder portions on both sides of the user 7, and may at least partially overlap the first portion 71 and the second portion 72.

The fourth lighting unit 24 may include a fourth light source 240, a 4-1 light guide 241, and a 4-2 light guide 242 inclined at an acute angle with respect to the second direction D2. The 4-1 light guide 241 and the 4-2 light guide 242 are positioned on opposite sides of the fourth light source 240. The 4-1 light guide 241 is located closer to the board unit 1 than the 4-2 light guide 242, and the angle formed by the 4-1 light guide 241 with the second direction D2 may be smaller than the angle formed by the 4-2 light guide 242 with the second direction D2. Accordingly, the light emitted from the fourth light source 240 illuminates both sides of the user 7 including both shoulders, so that the image of the user 7 formed on the first image layer has a more stereoscopic appearance. The 4-1 light guide 241 may be substantially parallel to the second direction D2.

The light absorbing unit 25 may be further provided so as to face the first surface 11 with the second lighting unit 22 interposed therebetween.

The light absorbing unit 25 may be formed of a material capable of absorbing light and, as shown in FIGS. 7 and 8, may be formed to surround the outer sides of the second lighting unit 22 to the fourth lighting unit 24. Accordingly, the user 7 is rendered more stereoscopically and clearly by the light irradiated from the second lighting unit 22 to the fourth lighting unit 24, and when the first image layer is overlapped with the second image layer, no unnecessary image is added, so that the third image layer can be generated effectively without any special editing.

According to another embodiment of the present invention, the lighting board system may further include a voice analysis unit 54, as shown in FIGS. 9 and 10. As shown in FIG. 1, the voice analysis unit 54 may be included in the control unit 5, but it is not limited thereto and may be provided as a unit separate from the control unit 5.

The voice analysis unit 54 may include a voice data generation unit 541, a voice data selection unit 542, a voice comparison unit 543, and a first adjustment signal generation unit 544. These need not necessarily be included in a single module, and at least one of them may be provided in a separate device. Each of them may also be a memory loaded with the corresponding program, or a part of such a memory.

The voice data generation unit 541 generates first voice data from the voice of the user (S11). The voice data generation unit 541 may be a sound receiver capable of recording the voice of the user, and a recorder included in the camera unit 3 may be used. However, the present invention is not limited to this, and a recording apparatus separate from the camera unit 3 may be provided to record the voice of the user and generate the first voice data from it.

The audio data selection unit 542 selects the second audio data from the first audio data (S12). If the first voice data is the voice content of the lecture being lectured by the user 7, the second voice data may be a specific word or a sentence of the first voice data.

The voice comparison unit 543 compares the second voice data with reference voice data (S13). The reference voice data may be a specific word or sentence stored in advance in a voice storage unit (not shown), and may be stored in advance by the user with his or her own voice.

The first adjustment signal generation unit 544 generates a first adjustment signal for reproducing the second image data and may be electrically connected to the image reproduction unit 6.

The first adjustment signal generation unit 544 generates the first adjustment signal when the second voice data coincides with the reference voice data (S14). The first adjustment signal is transmitted to the image reproduction unit 6, and the second image data is reproduced accordingly. The first adjustment signal may be used not only to reproduce the second image data but also to stop it.

If the second voice data does not coincide with the reference voice data, step S12 of selecting the second voice data from the first voice data may be performed. However, the present invention is not limited to this, and the first adjustment signal generation may be terminated.
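For illustration only, the control flow of steps S11 to S14 could be sketched as follows. The phrase-to-command table, the recognizer that produces the phrases, and the signal values are hypothetical; the embodiment only requires that a selected piece of voice data be compared with pre-stored reference voice data and that a first adjustment signal be generated on a match.

```python
from typing import Iterable, Optional

# Reference voice data (hypothetical): phrases pre-stored by the user, mapped to
# the playback action that the first adjustment signal should trigger.
REFERENCE_VOICE_DATA = {"play material": "PLAY", "stop material": "STOP"}

def first_adjustment_signal(recognized_phrases: Iterable[str]) -> Optional[str]:
    """Return a first adjustment signal when a phrase matches the reference voice data."""
    for phrase in recognized_phrases:                      # S12: select second voice data
        command = REFERENCE_VOICE_DATA.get(phrase.strip().lower())
        if command is not None:                            # S13/S14: match -> signal
            return command                                 # sent to the image reproduction unit
    return None                                            # no match: reselect or terminate
```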

According to another embodiment of the present invention, the lighting board system may further include an image analysis unit 55, as shown in FIGS. 11 and 12. The image analysis unit 55 may be included in the control unit 5 as shown in FIG. 1, but is not limited thereto and may be provided as a unit separate from the control unit 5.

The image analysis unit 55 may include an image data selection unit 551, an image comparison unit 552, and a second adjustment signal generation unit 553. These need not necessarily be included in a single module, and at least one of them may be provided in a separate device. Each of them may also be a memory loaded with the corresponding program, or a part of such a memory.

The image analysis unit 55 first receives an image of the user (S21). The image analysis unit 55 may receive the first image data generated by the camera unit 3 and analyze it. However, the present invention is not limited to this, and the image analysis unit 55 may further include a separate image data generation unit (not shown) so as to record and analyze separate image data different from the first image data. This image data generation unit may be a separate camera unit that is not combined with or coupled to the mirror unit.

The image data selection unit 551 selects third image data from the image data in which the image of the user is recorded (S22). The third image data may be a picture or a video containing a specific action of the user that can be patterned, selected from the images taken of the user.

The image comparison unit 552 compares the third image data with reference image data (S23). The reference image data may be a picture or a video containing a specific action of the user that can be patterned, stored in advance in an image storage unit (not shown), and may be stored in advance by the user with his or her own action.

The second adjustment signal generation unit 553 generates a second adjustment signal for adjusting the second image data and may be electrically connected to the image reproduction unit 6.

The second adjustment signal generation unit 553 generates the second adjustment signal when the third image data matches the reference image data (S24). This second adjustment signal is transmitted to the image reproduction unit 6, and the second image data is reproduced accordingly. The second adjustment signal may be used not only to reproduce the second image data but also to stop it.

If the third image data does not coincide with the reference image data, third image data may be selected again from the image data of the user (S21). However, the present invention is not limited to this, and the generation of the second adjustment signal may be terminated.
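As a sketch only, the comparison of steps S22 to S24 could use template matching, as below. Template matching is just one possible way to detect a patterned action; the embodiment does not prescribe the comparison method, and the threshold value and BGR frame format are assumptions.

```python
import cv2
import numpy as np

def second_adjustment_signal(third_image_data: np.ndarray,
                             reference_image_data: np.ndarray,
                             threshold: float = 0.8) -> bool:
    """Return True (generate the second adjustment signal) when the reference pattern is found."""
    frame_gray = cv2.cvtColor(third_image_data, cv2.COLOR_BGR2GRAY)
    ref_gray = cv2.cvtColor(reference_image_data, cv2.COLOR_BGR2GRAY)
    # S23: compare the selected frame with the pre-stored reference image data.
    score_map = cv2.matchTemplate(frame_gray, ref_gray, cv2.TM_CCOEFF_NORMED)
    return float(score_map.max()) >= threshold             # S24: match -> signal
```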

According to the above-described embodiments, the user 7 can automatically control the reproduction of the second image data by saying a specific word or sentence, or by performing a specific action, during the lecture.

According to another embodiment of the present invention, the lighting board system may further include a data analysis unit 56, as shown in FIGS. 13 and 14. As shown in FIG. 1, the data analysis unit 56 may be included in the control unit 5, but is not limited thereto and may be provided as a unit separate from the control unit 5.

The data analysis unit 56 may include a data selection unit 561, a data comparison unit 562, and a third adjustment signal generation unit 563. These need not necessarily be included in a single module, and at least one of them may be provided in a separate device. Each of them may also be a memory loaded with the corresponding program, or a part of such a memory.

The data analysis unit 56 may receive data about the user and analyze the data. The data about the user may be voice data of the user or image data of the user. Such data may be acquired by the camera unit 3, but is not limited thereto and may be acquired by a separate sound recording apparatus or video recording apparatus.

The data selection unit 561 selects first data from the data about the user (S31). If the data about the user is voice data, the first data may be a specific word or sentence. If the data about the user is image data, the first data may be a picture or a video containing a specific action of the user that can be patterned, selected from the images taken of the user.

The data comparison unit 562 compares the first data with the first reference data (S32). The first reference data may be reference voice data or reference video data, which is previously stored in an image storage unit (not shown), and may be stored in advance by the user with his voice or action.

The adjustment signal generation unit 563 generates an adjustment signal for adjusting the second image data and may be electrically connected to the image reproduction unit 6.

The adjustment signal generation unit 563 generates the adjustment signal when the first data matches the first reference data (S35). This adjustment signal is transmitted to the image reproducing unit 6 and the second image data is reproduced accordingly. The adjustment signal may not only cause the second video data to be reproduced but also stop the second video data.

If the first data does not coincide with the first reference data, step S33 of selecting second data from the data about the user may be performed. In this case, the second data may be voice data of the user or image data of the user.

Alternatively, the second data may be the same kind of data as the first data. For example, when the first data is voice data, the second data may be voice data. At this time, the second data may be a specific word or a sentence different from the first data.

Optionally, the second data may be a different kind of data than the first data. For example, when the first data is audio data, the second data may be video data.

The data comparison unit 562 then compares the second data with second reference data (S34). The second reference data may be reference voice data or reference image data stored in advance in an image storage unit (not shown), and may be stored in advance by the user with his or her own voice or action.

The second reference data may be different from the first reference data.

If the second data is the same kind of data as the first data, the second reference data may be a specific word or a sentence different from the first reference data.

If the second data is data of a different type from the first data, the second reference data may be a reference data of a different type from the first reference data.

The adjustment signal generating unit 563 generates the adjustment signal when the second data matches the second reference data (S35). This adjustment signal is transmitted to the image reproducing unit 6 and the second image data is reproduced accordingly. The adjustment signal may not only cause the second video data to be reproduced but also stop the second video data.

If the second data does not coincide with the second reference data, step S31 of selecting the first data from the data about the user may be performed again. However, the present invention is not limited to this; step S33 of selecting the second data from the data about the user may be performed, or the adjustment signal generation may be terminated.
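For illustration, the fallback flow of steps S31 to S35 (try the first data against the first reference data, then the second data against the second reference data) could be sketched as follows. The match predicate and the signal value are hypothetical placeholders; the second data may be of the same kind as the first data (for example, another word) or of a different kind (for example, video instead of voice).

```python
from typing import Any, Callable, Optional

def third_adjustment_signal(first_data: Any, first_reference: Any,
                            second_data: Any, second_reference: Any,
                            matches: Callable[[Any, Any], bool]) -> Optional[str]:
    """Return an adjustment signal for the image reproduction unit, or None."""
    if matches(first_data, first_reference):         # S31/S32: first data vs first reference
        return "ADJUST"                              # S35: adjustment signal generated
    if matches(second_data, second_reference):       # S33/S34: fall back to second data
        return "ADJUST"                              # S35
    return None                                      # no match: reselect or terminate
```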

According to the embodiment described above, when the user 7 speaks a specific word or sentence or performs a specific action during a lecture in order to automatically control reproduction of the second image data, the probability of such an adjustment failing is reduced, so that as the user 7 proceeds with the lecture, the second image data can be adjusted automatically and reliably according to his or her intention.

The present invention has been described above with reference to preferred embodiments. It will be understood by those skilled in the art that the present invention may be embodied in various other forms without departing from the spirit or essential characteristics thereof. Therefore, the above-described embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is indicated by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (10)

A lighting board system comprising:
a board unit including a first surface facing a user, a second surface facing the first surface in a first direction, and a third surface extending to connect the first surface and the second surface, the board unit being configured to transmit light along the first direction;
a first lighting unit positioned to face at least a part of the third surface and configured to irradiate light in a second direction;
a camera unit positioned to face the second surface, with an imaging direction formed in the second direction perpendicular to the first direction;
a mirror unit disposed in front of the imaging direction of the camera unit and inclined with respect to the first direction and the second direction; and
a first image generation unit electrically connected to the camera unit and configured to generate a first image layer from first image data generated by the camera unit,
wherein the first and second surfaces of the board unit are transparent, and
wherein the camera unit is located opposite to the user with the board unit interposed therebetween and captures an image of the user that has passed through the board unit and been reflected by the mirror unit, the captured image being included in the first image data.
The lighting board system according to claim 1, further comprising:
an image reproduction unit for reproducing second image data different from the first image data;
a second image generation unit configured to generate a second image layer from the second image data transmitted from the image reproduction unit; and
a third image generation unit configured to generate a third image layer by combining the first image layer and the second image layer so as to overlap each other.
The lighting board system according to claim 1,
further comprising a second lighting unit positioned to face the first surface with the user therebetween and configured to irradiate light toward a first portion of the user.
The lighting board system according to claim 3,
further comprising a third lighting unit positioned to face the second lighting unit with the user therebetween and configured to irradiate light toward a second portion of the user other than the first portion.
The lighting board system according to claim 3,
further comprising a fourth lighting unit located on both sides of the board unit in the second direction and configured to irradiate light toward a third portion of the user other than the first portion and the second portion.
The lighting board system according to claim 3,
further comprising a light absorbing unit positioned to face the first surface with the second lighting unit therebetween and configured to absorb light.
The lighting board system according to claim 2,
further comprising a voice analysis unit for analyzing the voice of the user to adjust the second image data.
The lighting board system according to claim 2,
further comprising a voice analysis unit for analyzing the voice of the user to adjust the second image data,
wherein the voice analysis unit comprises:
a voice data generation unit that generates first voice data from the user's voice;
a voice data selection unit that selects second voice data from the first voice data;
a voice comparison unit for comparing the second voice data with reference voice data; and
a first adjustment signal generation unit electrically connected to the image reproduction unit and configured to generate a first adjustment signal for adjusting the second image data when the second voice data matches the reference voice data.
The lighting board system according to claim 2,
further comprising an image analysis unit for analyzing an image of the user to adjust the second image data.
The lighting board system according to claim 2,
further comprising an image analysis unit for analyzing an image of the user to adjust the second image data,
wherein the image analysis unit comprises:
an image data selection unit for selecting third image data from the image of the user;
an image comparison unit for comparing the third image data with reference image data; and
a second adjustment signal generation unit electrically connected to the image reproduction unit and configured to generate a second adjustment signal for adjusting the second image data when the third image data matches the reference image data.
KR1020150083671A 2015-06-12 2015-06-12 Lighting board system KR101663240B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150083671A KR101663240B1 (en) 2015-06-12 2015-06-12 Lighting board system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150083671A KR101663240B1 (en) 2015-06-12 2015-06-12 Lighting board system

Publications (1)

Publication Number Publication Date
KR101663240B1 (en) 2016-10-14

Family

ID=57157042

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150083671A KR101663240B1 (en) 2015-06-12 2015-06-12 Lighting board system

Country Status (1)

Country Link
KR (1) KR101663240B1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010224015A (en) * 2009-03-19 2010-10-07 Sanyo Electric Co Ltd Projection video display device, writing board, and projection video system
KR20100112776A (en) * 2009-04-10 2010-10-20 (주)넥손 Lighting board
US20120154511A1 (en) * 2010-12-20 2012-06-21 Shi-Ping Hsu Systems and methods for providing geographically distributed creative design
KR20130133664A (en) * 2012-05-21 2013-12-09 삼성전자주식회사 Method, apparatus and system for interactive learning management and educational matters

Similar Documents

Publication Publication Date Title
JP2005218103A (en) Device for displaying facial feature
US20130113891A1 (en) Parallax scanning methods for stereoscopic three-dimensional imaging
US6082865A (en) Projection type display device with image pickup function and communication system
KR101554574B1 (en) System and Method for Recording Lecture Video Using Transparent Board
Gleicher et al. A framework for virtual videography
KR101663240B1 (en) Lighting board system
JP4399125B2 (en) Image display device and image display method
US11462122B2 (en) Illustration instructor
KR101593136B1 (en) Led glass board based video lecture system
US20220360755A1 (en) Interactive display with integrated camera for capturing audio and visual information
CN112689994B (en) Demonstration system and demonstration method
JP5451864B1 (en) Presentation device
US20220101743A1 (en) Studio Arrangement and Playing Devices Whereby Online Students Worldwide Can Learn Mathematics from an Outstanding Teacher by Watching Videos Showing the Teachers Face Body Language and Clearly Legible Writing
JP2005175644A (en) Video image synthesizer, video image driver for driving video image synthesizer, transparent panel projecting video image of video image synthesizer, and three-dimensional video image system having video image synthesizer, video image driver and transparent panel
CN219999515U (en) Intelligent holographic projection sound box
KR102437155B1 (en) Realistic Real-Time Learning System Using Hologram Display Device And Method Thereof
Thomson et al. Using Video and Blended Learning
Teneqexhi et al. Making virtual classrooms of Google platform more real using transparent interactive Screen-board (tiSb-Albania)
JP5023362B2 (en) 3D image playback device
Moody et al. Lighting for Televised Live Events: Making Your Live Production Look Great for the Eye and the Camera
KR20100010610A (en) Device for providing moving picture
Rakov Unfolding the Assemblage: Towards an Archaeology of 3D Systems
Zavagno “Shadows of Reality” and “Documentary Tools”: An MA Research-Creation Thesis Project on Facing Ethical Dilemmas in Documentary Filmmaking
Baur Exploring Cinematic VR: An Analysis of the Tools, Processes, and Storytelling Techniques of Virtual Reality Filmmaking
Ouglov et al. Panoramic video in video-mediated education

Legal Events

Date Code Title Description
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20190808

Year of fee payment: 4