KR101741550B1 - Method and apparatus for providing optimized viewing conditions in multimedia device - Google Patents


Info

Publication number
KR101741550B1
Authority
KR
South Korea
Prior art keywords
user
multimedia device
information
image
unit
Prior art date
Application number
KR1020100112530A
Other languages
Korean (ko)
Other versions
KR20120051210A (en)
Inventor
강민구
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 filed Critical 엘지전자 주식회사
Priority to KR1020100112530A priority Critical patent/KR101741550B1/en
Publication of KR20120051210A publication Critical patent/KR20120051210A/en
Application granted granted Critical
Publication of KR101741550B1 publication Critical patent/KR101741550B1/en


Abstract

The present invention relates to an operating method for a multimedia device, and to a multimedia device, capable of recognizing a user and providing an audio/video output environment suited to the recognized user. Specifically, the multimedia device recognizes the user currently using it, retrieves the recognized user's information, determines the user's preferred genre based on that information, and configures acoustic beamforming and the audio/video output environment on the basis of the preferred genre. The multimedia device can thus recognize the user and establish an optimal viewing environment, so that the user is provided with picture and sound suited to his or her preferences without manual adjustment.

Description

TECHNICAL FIELD [0001] The present invention relates to a method and apparatus for providing an optimal viewing environment using beamforming in a multimedia device.

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a multimedia device and an operating method thereof, and more particularly, to a multimedia device and an operating method that can improve usability for the user.

In particular, the present invention relates to an operation method and a multimedia device capable of recognizing a user in a multimedia device and providing an audio / video output environment suitable for the recognized user.

A multimedia device is, for example, a device that receives and processes broadcast images for the user to view. It displays, on its display, the broadcast that the user selects from among the broadcast signals transmitted by broadcast stations. Around the world, broadcasting is currently changing from analog to digital.

Digital broadcasting refers to broadcasting that transmits digital video and audio signals. Compared with analog broadcasting, it is resistant to external noise, suffers less data loss, lends itself to error correction, offers higher resolution, and provides a clearer picture. Unlike analog broadcasting, digital broadcasting also supports bidirectional services.

In addition, with the adoption of digital broadcasting, today's multimedia devices can use a wider variety of contents and services than earlier devices, such as game, movie, and music contents. To make use of such contents, the device's sound and video output can be adjusted in finer detail.

In the related art, however, as the adjustable elements of a multimedia device have multiplied, it has become inconvenient for the user to decide how far to adjust each element. Moreover, when a plurality of users use one multimedia device, the settings of those elements must be reconfigured every time the user changes in order to provide each user with a suitable viewing environment.

SUMMARY OF THE INVENTION Accordingly, the present invention has been made in view of the above problems. There is a need for an operating method by which a multimedia device recognizes the user currently using it and provides a viewing environment suited to the recognized user, and for a multimedia device implementing such a method.

It is an object of the present invention to provide a multimedia device and an operation method thereof that can improve the usability of a user.

It is another object of the present invention to provide a multimedia device and an operation method thereof that can provide a suitable viewing environment according to a user.

Another object of the present invention is to provide a multimedia device and a method of operating the same that enable sound setting and image setting based on a suitable user when a plurality of users are using the multimedia device.

According to an aspect of the present invention, there is provided a method of providing an optimal viewing environment in a multimedia device, comprising: recognizing a user using the multimedia device; retrieving user information of the recognized user; determining the user's preferred genre based on the retrieved user information; and configuring acoustic beamforming based on the preferred genre.
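The claimed steps can be sketched as a short program. This is only an illustration: the database contents, the function names, and the genre-to-beam-width mapping are invented here and are not part of the patent.

```python
# Illustrative sketch of the claimed single-user flow: recognize the user,
# retrieve stored user information, determine the preferred genre, and
# derive an acoustic beamforming setting from it. The database contents
# and the genre-to-beam-width mapping are hypothetical.

USER_DB = {
    "alice": {"preferred_genre": "movie"},
    "bob": {"preferred_genre": "news"},
}

# Assumed convention: wider beams for ambience-heavy genres, narrower
# beams for speech-centric content.
GENRE_BEAM_WIDTH_DEG = {"movie": 120, "music": 90, "news": 40, "sports": 100}

DEFAULT_BEAM_WIDTH_DEG = 180  # fallback when nothing is known about the user

def beam_width_for(recognized_user):
    """Return the beam width (degrees) to apply for the recognized user."""
    info = USER_DB.get(recognized_user)
    if info is None:
        return DEFAULT_BEAM_WIDTH_DEG
    return GENRE_BEAM_WIDTH_DEG.get(info["preferred_genre"],
                                    DEFAULT_BEAM_WIDTH_DEG)
```

An unrecognized user simply falls back to the default setting, mirroring the guide-message path described later in the specification.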

According to another aspect of the present invention, there is provided a method of providing an optimal viewing environment in a multimedia device, comprising: recognizing users using the multimedia device; retrieving the user information of each recognized user when a plurality of users are recognized; determining each user's preferred genre based on the retrieved user information; and configuring acoustic beamforming based on the preferred genre of a priority user among the recognized users.
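The plural-user variant can likewise be sketched in a few lines; the priority values and the "lower number wins" convention are assumptions for illustration, since the text does not fix how priority is encoded.

```python
# Illustrative sketch of the plural-user case: among the recognized users,
# select the priority user and apply that user's preferred-genre setting.
# The profiles and the "lower number = higher priority" rule are assumptions.

USER_PROFILES = {
    "dad":   {"priority": 1, "preferred_genre": "news"},
    "mom":   {"priority": 2, "preferred_genre": "movie"},
    "child": {"priority": 3, "preferred_genre": "animation"},
}

def priority_user(recognized):
    """Return the highest-priority user among those recognized."""
    return min(recognized, key=lambda u: USER_PROFILES[u]["priority"])

def genre_for_beamforming(recognized):
    """Genre that drives the beamforming setting for this group of viewers."""
    return USER_PROFILES[priority_user(recognized)]["preferred_genre"]
```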

According to another aspect of the present invention, there is provided a multimedia device comprising: an image sensing unit for acquiring user image information; a recognition unit for recognizing a user based on the data acquired by the image sensing unit; a storage unit for storing user information for each user; a sound output unit adapted to adjust sound by applying beamforming; and a control unit that retrieves, from the storage unit, the user information of the user recognized by the recognition unit, determines the user's preferred genre based on the retrieved user information, and controls the sound output unit to apply acoustic beamforming based on the preferred genre.

According to an embodiment of the present invention, even if the user does not adjust the viewing environment directly, the multimedia device recognizes the user and establishes an optimal viewing environment, so the user is provided with the optimum picture and sound.

According to another embodiment of the present invention, even when a plurality of users use one multimedia device simultaneously, the device searches for a priority user and establishes an optimal viewing environment accordingly, thereby enhancing user convenience.

Meanwhile, according to another embodiment of the present invention, various user interfaces can be provided in the multimedia device, and the multimedia device can be operated conveniently.

FIG. 1 is a diagram schematically illustrating an example of an overall system including a multimedia device according to an embodiment of the present invention.
FIG. 2 is a diagram showing an example of the multimedia device shown in FIG. 1 in more detail.
FIG. 3 is a view showing, together, a multimedia device using a plurality of heterogeneous image sensors according to an embodiment of the present invention and the captured screen.
FIG. 4 is a diagram illustrating a process of using detection data and recognition data in a plurality of heterogeneous image sensors and a multimedia device according to an embodiment of the present invention.
FIG. 5 is a diagram for explaining the face vectors stored in the database shown in FIG. 4.
FIG. 6 is a view explaining the operation of a plurality of heterogeneous image sensors connected to a multimedia device, divided into a hardware area and a software area, according to an embodiment of the present invention.
FIG. 7 is a view showing a plurality of heterogeneous image sensors and a multimedia device according to an embodiment of the present invention, respectively.
FIG. 8 is a view showing a plurality of heterogeneous image sensors and a multimedia device according to another embodiment of the present invention, respectively.
FIG. 9 is a diagram illustrating a plurality of heterogeneous image sensors according to an embodiment of the present invention in more detail.
FIG. 10 is a view showing an example of a first image sensor among a plurality of heterogeneous image sensors according to an embodiment of the present invention.
FIG. 11 is a view showing another example of the first image sensor among the plurality of heterogeneous image sensors according to an embodiment of the present invention.
FIG. 12 is a diagram for explaining a method of calculating a distance using the first image sensor shown in FIG. 11.
FIG. 13 is a view showing an example of the multimedia device shown in FIGS. 1 and 2 in more detail.
FIG. 14 illustrates a user and a multimedia device in accordance with an embodiment of the present invention.
FIG. 15 is a diagram illustrating an image of a depth image sensor that recognizes the coordinates of each part of a user's body according to an embodiment of the present invention.
FIG. 16 is a flowchart illustrating a process of adjusting the viewing environment in a multimedia device according to an embodiment of the present invention.
FIG. 17 is a flowchart illustrating a process of adjusting the viewing environment when a multimedia device has a plurality of viewers, according to an embodiment of the present invention.
FIG. 18 is a flowchart illustrating in detail the process of setting beamforming according to the priority user of FIG. 17.
FIG. 19 is a view illustrating a display screen including a user recognition message according to an embodiment of the present invention.
FIG. 20 is a view illustrating a display screen including a user-setting application message according to an embodiment of the present invention.
FIG. 21 is a diagram illustrating a display screen including a confirmation menu for applying a user setting according to an embodiment of the present invention.
FIG. 22 is a view illustrating a display screen including a user selection menu according to an embodiment of the present invention.
FIG. 23 is a view illustrating a display screen including a personalized initial screen according to an embodiment of the present invention.

Hereinafter, various embodiments of the present invention will be described in detail with reference to the accompanying drawings. The suffixes "module" and "part" attached to component names in the following description are given merely for ease of description; a "module" or "part" may be designed in hardware or in software.

Meanwhile, the multimedia device described herein corresponds, for example, to various types of devices that receive and process broadcast data. The multimedia device may further correspond to a Connected TV. In addition to the broadcast receiving function, a Connected TV is equipped with a wired/wireless communication device and the like, and may therefore provide more convenient input interfaces, such as a handwriting-style input device, a touch screen, or a motion-sensing remote control. With the support of a wired or wireless Internet function, it can connect to the Internet and to a computer, and can perform functions such as e-mail, web browsing, banking, or gaming. A standardized general-purpose OS may be used for these various functions.

Accordingly, the Connected TV can perform various user-friendly functions because various applications can be freely added or deleted on a general-purpose OS kernel, for example. More specifically, the Connected TV may be, for example, a web TV, an Internet TV, an HBBTV, a smart TV, a DTV, or the like, and may be applied to a smartphone according to circumstances.

BRIEF DESCRIPTION OF THE DRAWINGS The above and other features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings.

The terms used in the present invention are selected from general terms that are currently in wide use, in consideration of the functions of the present invention; however, these may vary depending on the intention of those skilled in the art, custom, or the emergence of new technology. In certain cases, a term may have been chosen arbitrarily by the applicant, in which case its meaning is described in the corresponding part of the description. Therefore, the terminology used herein should be interpreted based on the meaning of each term and the overall contents of the specification, rather than on the mere name of the term.

FIG. 1 is a diagram schematically illustrating an example of an overall broadcasting system including a multimedia device according to an embodiment of the present invention. The multimedia device of FIG. 1 may correspond, for example, to a Connected TV; however, the scope of the present invention is not limited to the Connected TV, and should, in principle, be determined by the claims.

Referring to FIG. 1, an overall system including a multimedia device according to an embodiment of the present invention includes a content provider (CP) 10, a service provider (SP) 20, a network provider (NP) 30, and an HNED 40. The HNED 40 corresponds, for example, to the client 100, which is the multimedia device according to an embodiment of the present invention.

The content provider 10 produces and provides various contents. As shown in FIG. 1, examples of the content provider 10 include a terrestrial broadcaster, a cable SO (System Operator) or MSO (Multiple System Operator), a satellite broadcaster, an Internet broadcaster, and the like. The content provider 10 may also provide various applications in addition to broadcast contents.

The service provider 20 can package the contents provided by the content provider 10 into services and provide them. For example, the service provider 20 of FIG. 1 can package a first terrestrial broadcast, a second terrestrial broadcast, a cable MSO, a satellite broadcast, various Internet broadcasts, applications, and the like, and provide the package to the user.

The network provider 30 may provide a network for delivering services to the client 100. The client 100 may build a home network and receive services as a home network end device (HNED).

On the other hand, the client 100 can also provide contents through the network. In that case, conversely, the client 100 may act as a content provider, and the content provider 10 may receive contents from the client 100. This has the advantage of enabling interactive content services and data services.

FIG. 2 is a diagram showing an example of the multimedia device shown in FIG. 1 in more detail.

The multimedia device 200 includes a network interface 201, a TCP/IP manager 202, a service delivery manager 203, a demultiplexer 205, a PSI & PSIP and/or SI decoder 204, an audio decoder 206, a video decoder 207, a display A/V and OSD module 208, a service control manager 209, a service discovery manager 210, a metadata manager 212, an SI & Metadata DB 211, a UI manager 214, and a service manager 213. Furthermore, a plurality of heterogeneous image sensors 260 are connected to the multimedia device 200, for example by a USB-type connection. Although the plurality of heterogeneous image sensors 260 are shown in FIG. 2 as a separate module, they may also be designed to be housed inside the multimedia device 200.

The network interface unit 201 receives packets received from the network and transmits packets to the network. That is, the network interface unit 201 receives services, contents, and the like from the service provider through the network.

The TCP/IP manager 202 takes part in delivering the packets received by and transmitted from the multimedia device 200, that is, in packet delivery from a source to a destination. The service delivery manager 203 is responsible for controlling received service data. For example, RTP/RTCP can be used to control real-time streaming data. When real-time streaming data are transmitted using RTP, the service delivery manager 203 parses the received data packets according to RTP and transmits them to the demultiplexer 205, or stores them in the SI & Metadata DB 211 under the control of the service manager 213. Using RTCP, it feeds network reception information back to the server providing the service.

The demultiplexer 205 demultiplexes the received packets into audio data, video data, and PSI (Program Specific Information) data, and transmits them to the audio/video decoders 206 and 207 and the PSI & PSIP and/or SI decoder 204, respectively.

The PSI & PSIP and/or SI decoder 204 receives and decodes the PSI section, PSIP (Program and Service Information Protocol) section, or SI (Service Information) section demultiplexed by the demultiplexer 205.

In addition, the PSI & PSIP and / or SI decoder 204 decodes the received sections to create a database of service information, and stores the database related to the service information in the SI & Metadata DB 211.

The audio/video decoders 206 and 207 decode the video data and audio data received from the demultiplexer 205.

The UI manager 214 provides a GUI (Graphic User Interface) for the user using an OSD (On Screen Display) or the like, and receives a key input from the user to perform a receiver operation according to the input. For example, upon receipt of a key input from the user regarding channel selection, the key input signal is transmitted to the service manager 213.

The service manager 213 controls the manager associated with the service, such as the service delivery manager 203, the service discovery manager 210, the service control manager 209, and the metadata manager 212.

The service manager 213 also creates a channel map and selects a channel using the channel map according to the key input received from the user interface manager 214. The service discovery manager 210 provides information necessary for selecting a service provider that provides the service. Upon receiving a signal regarding channel selection from the service manager 213, the service discovery manager 210 searches for the service using the information.

The service control manager 209 is responsible for the selection and control of services. For example, when the user selects a live broadcasting service of the conventional broadcast type, IGMP or RTSP is used; when the user selects a service such as VOD (Video On Demand), RTSP is used to select and control the service. The metadata manager 212 manages the metadata associated with services and stores the metadata in the SI & Metadata DB 211.

The SI & Metadata DB 211 stores the service information decoded by the PSI & PSIP and/or SI decoder 204, the metadata managed by the metadata manager 212, and the information, provided by the service discovery manager 210, that is necessary for selecting a service provider. The SI & Metadata DB 211 may also store set-up data for the system.

Meanwhile, the IG 250 is a gateway that collects functions necessary for accessing an IMS-based IPTV service.

The plurality of heterogeneous image sensors 260 shown in FIG. 2 are designed to capture a single image or a plurality of images of a person or object located in the vicinity of the multimedia device 200. More specifically, they are designed to capture such images continuously, periodically, at a selected time, or only under a specific condition. A detailed description is given below.

FIG. 3 is a view showing, together, a multimedia device using a plurality of heterogeneous image sensors according to an embodiment of the present invention and the captured screen. Hereinafter, a multimedia device using a plurality of heterogeneous image sensors, and the screen it captures, will be described with reference to FIG. 3.

In general, first image sensors associated with depth data processing have limited resolution (for example, up to VGA class) and a limited recognition distance (for example, 3.5 m), and are therefore unsuitable for long-distance face recognition. Second image sensors, associated with color data processing, have the disadvantages of slow recognition and a lack of robustness to lighting conditions. To compensate for the disadvantages of each image sensor, the multimedia device according to an embodiment of the present invention is designed to work with a hybrid image sensor module in which a first image sensor and a second image sensor are combined.

As the first image sensor described above, for example, an IR camera or a depth camera is used. More specifically, the TOF (Time Of Flight) method and the structured light method are discussed in connection with the IR camera or depth camera. The TOF method calculates distance information from the time it takes emitted infrared light to return; the structured light method projects infrared light in a specific pattern and calculates distance by analyzing the deformation of that pattern. The first image sensor has advantages in depth data recognition and processing speed, and can easily sense objects and people even in the dark. However, it has the disadvantage of low resolution at long distances.
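The TOF calculation just described can be written out directly: the measured quantity is the round-trip time of the emitted infrared light, so the one-way distance is half that time multiplied by the speed of light.

```python
# TOF distance from the round-trip time of an emitted infrared pulse:
# the light travels to the object and back, so distance = c * t / 2.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_s):
    """One-way distance in meters for a measured round-trip time in seconds."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0
```

For instance, a round trip of 20 ns corresponds to a target roughly 3 m away, which matches the recognition-distance scale cited above.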

Furthermore, as the second image sensor described above, for example, a color camera or an RGB camera is used. More specifically, the stereo camera method and the mono camera method are discussed in connection with the color or RGB camera. The stereo camera method detects and tracks a hand or face based on parallax comparison between the images captured by two cameras; the mono camera method detects and tracks a hand or face based on the shape and color information captured by a single camera. The second image sensor offers higher resolution than the first image sensor, but it is vulnerable to ambient lighting and performs poorly in the dark. In particular, accurate depth recognition is difficult.

In order to solve these conventional problems, the multimedia device according to an embodiment of the present invention is designed to include both a first image sensor and a second image sensor, as shown in FIG. 3. The image sensors may be embedded in the multimedia device or designed as a separate hardware module. First, as shown in area (b) of FIG. 3, the first image sensor captures images of the users located in the vicinity of the multimedia device. The captured images are shown in sequence in (1), (2), (3), and (4) of FIG. 3.

On the other hand, when the capture and data analysis by the first image sensor are completed, the second image sensor captures an image of a specific user's face, as shown in area (a) of FIG. 3. The captured images are shown in sequence in (5), (6), and (7) of FIG. 3.

The first image sensor among the plurality of heterogeneous image sensors according to an embodiment of the present invention captures a first image of the area around the multimedia device and extracts depth data from the captured first image. As shown in (1) of FIG. 3, the area of each object can be displayed in a different shade according to its distance.

Further, the first image sensor can detect and recognize the face of at least one user using the extracted depth data. That is, using a pre-stored database or the like, it extracts the user's body information (for example, face, hands, feet, and joints), as shown in (2) of FIG. 3, and further calculates position coordinates and distance information for it. More specifically, it is designed to calculate x, y, and z values as the positional information of the user's face, where x is the position of the face on the horizontal axis of the captured first image, y is the position of the face on the vertical axis of the first image, and z is the distance between the user's face and the first image sensor.

In addition, the second image sensor, which extracts a color image, among the plurality of heterogeneous image sensors according to an embodiment of the present invention, photographs a second image of the recognized user's face.

On the other hand, when the second image sensor and the first image sensor shown in FIG. 3 are designed to be adjacent to each other, the error caused by the difference in their physical positions may be negligible. According to another embodiment of the present invention, however, the coordinate information or distance information acquired by the first image sensor is corrected using information on that physical position difference, and the second image sensor captures the user using the corrected coordinate or distance information. If the first image sensor and the second image sensor are arranged horizontally with respect to the ground, the information on the physical position difference may be set on the basis of a horizontal frame. As shown in (3) of FIG. 3, feature information is extracted from the captured second image. The feature information is, for example, data on specific regions (e.g., mouth, nose, and eyes) used to identify the multiple users of the multimedia device. Furthermore, the second image sensor may zoom in on the area of the user's face based on the coordinate values (x, y, z) obtained from the capture by the first image sensor; this corresponds to the transition from (5) to (6) in FIG. 3.
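The zoom-in step above can be sketched numerically: the crop window in the color image is centered on the face coordinates reported by the depth sensor, shifted by a parallax correction for the fixed offset between the two sensors, with a size that shrinks with distance. The offset value, reference face size, and linear parallax model below are all illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: derive the color camera's zoom window from the
# (x, y, z) face coordinates reported by the depth sensor. The sensor
# offset, reference face size, and linear parallax model are assumptions.

def zoom_window(x, y, z, sensor_offset_x_m=0.05, face_size_at_1m=0.20):
    """Return (cx, cy, half_size) of the crop region, in image-plane units.

    The horizontal center is shifted by the parallax of the sensor offset
    (offset / distance, a small-angle approximation), and the apparent
    face size shrinks linearly with the distance z (meters).
    """
    cx = x + sensor_offset_x_m / z       # parallax-corrected center
    half_size = (face_size_at_1m / z) / 2.0  # apparent size falls off with z
    return cx, y, half_size
```

When the sensors are mounted adjacent to each other, `sensor_offset_x_m` approaches zero and the correction vanishes, matching the "negligible error" case described above.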

When the capture and analysis by the first image sensor and the second image sensor are completed, the multimedia device according to an embodiment of the present invention accesses a memory storing data corresponding to the extracted feature information, and extracts the information identifying a specific user stored in that memory.

If information identifying the specific user exists in the memory, the multimedia device provides a predetermined service for the specific user.

On the other hand, if the information identifying the particular user is not present in the memory, the multimedia device is designed to display a guide message for storing the recognized user in the memory.

As described above, according to an embodiment of the present invention, the first image sensor detects the user's position information or face coordinate information, and the second image sensor recognizes the face using the data acquired by the first image sensor.

Furthermore, according to another embodiment of the present invention, the second image sensor is designed to operate only under specific conditions rather than unconditionally. For example, if the distance to the user obtained by the first image sensor is less than or equal to a first reference value, or if the recognition rate of the user's face obtained by the first image sensor is equal to or greater than a second reference value, the multimedia device detects and recognizes the face of a user located nearby using only the first image sensor. On the other hand, if the distance obtained by the first image sensor exceeds the first reference value, or if the recognition rate is less than the second reference value, the device additionally uses the second image sensor to recognize the user's face.
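The condition above reduces to a small predicate. The numeric thresholds below are placeholders for the first and second reference values, which the text does not fix.

```python
# Sketch of the sensor-selection rule: the color (second) sensor is engaged
# only when the depth (first) sensor's result is insufficient. The numeric
# thresholds stand in for the unspecified first/second reference values.
FIRST_REFERENCE_M = 3.5        # assumed distance limit of the depth sensor
SECOND_REFERENCE_RATE = 0.9    # assumed minimum acceptable recognition rate

def needs_second_sensor(distance_m, recognition_rate):
    """True when the device should additionally engage the color sensor."""
    return (distance_m > FIRST_REFERENCE_M
            or recognition_rate < SECOND_REFERENCE_RATE)
```

Note that either condition alone suffices: a distant user or a poorly recognized nearby user both trigger the second sensor.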

According to another embodiment of the present invention, in the process of recognizing the user's face, the second image sensor zooms in using the distance information acquired by the first image sensor, and captures only the face portion using the face coordinate information acquired by the first image sensor.

Therefore, when a plurality of heterogeneous image sensors are used in this way, long-distance face recognition becomes possible and the data processing speed improves over the prior art.

FIG. 4 is a diagram illustrating a process of using detection data and recognition data in a plurality of heterogeneous image sensors and a multimedia device according to an embodiment of the present invention.

Face detection and face recognition are different processes. Face detection is the process of detecting a face region within an image, whereas face recognition is the process of determining whether a detected face corresponds to a specific user. In particular, the process of executing face detection using the first image sensor and face recognition using the second image sensor according to an embodiment of the present invention will be described with reference to FIG. 4.

As shown in FIG. 4, the multimedia device according to an embodiment of the present invention includes a detection module 301, a recognition module 302, a database 303, a first image sensor 304, and a second image sensor 305, and uses detection data 306 and recognition data 307 as needed. The detection data 306 may be generated based on, for example, information-based detection techniques, feature-based detection techniques, template-matching techniques, or appearance-based detection techniques. The recognition data 307 include data for identifying a specific user, such as the eyes, nose, mouth, and jaw, and their areas, distances, shapes, and angles.

Further, the detection module 301 determines the presence of the user's face using the image data received from the first image sensor 304. In estimating the area where the user's face is located, the information-based detection techniques, feature-based detection techniques, template-matching techniques, and appearance-based detection techniques described above may be used.

The recognition module 302 determines whether the user is a specific user using the image data received from the second image sensor 305. At this time, it compares the received image data with the face vector information stored in the DB 303, based on the recognition data 307 described above. This is described in more detail with reference to FIG. 5.

FIG. 5 is a diagram for explaining the face vectors stored in the database shown in FIG. 4.

As shown in FIG. 5, face vectors are stored for each user of the multimedia device according to an embodiment of the present invention. A face vector is, for example, a data set of the feature information appearing on a user's face, and is used to identify each specific user.
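A minimal nearest-neighbor sketch of this identification step follows; the stored vectors, their length, and the acceptance threshold are invented for illustration and are not taken from the patent.

```python
# Sketch of face-vector matching: compare an extracted feature vector with
# the per-user vectors in the database and accept the nearest user within a
# threshold. The vectors, their length, and the threshold are illustrative.
import math

FACE_VECTOR_DB = {
    "alice": [0.1, 0.8, 0.3],
    "bob":   [0.9, 0.2, 0.5],
}

def identify(vector, threshold=0.5):
    """Return the matching user name, or None if no stored vector is close."""
    best_user, best_dist = None, float("inf")
    for user, stored in FACE_VECTOR_DB.items():
        dist = math.dist(vector, stored)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= threshold else None
```

The `None` branch corresponds to the case described above in which no identifying information exists in memory and the device instead displays a guide message for registering the recognized user.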

FIG. 6 is a view explaining the operation of a plurality of heterogeneous image sensors connected to a multimedia device, divided into a hardware area and a software area, according to an embodiment of the present invention.

Referring to FIG. 6, the configuration in which the multimedia device receives images through the plurality of heterogeneous image sensors and operates on them is described in terms of a hardware area 360 of the image sensors and a software area 350 of the multimedia device that processes the data received from the image sensors.

In FIG. 6, the hardware area 360 is shown as a separate module. However, the hardware area 360 may be integrated into a multimedia device that processes the software area 350.

First, the hardware area may include a data collection area 340 and a firmware area 330.

The data collection area 340 receives, through the image sensors, the original data to be recognized by the multimedia device. It may include an IR light projector, a depth image sensor, a color image sensor, a microphone, and a camera chip.

In addition, the firmware area 330 resides and operates in the hardware area, forming the connection between the hardware area and the software area. It can also be configured as required by a specific application, and can perform downsampling, mirroring, and the like.
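A minimal sketch of the two firmware operations named above, assuming a frame represented simply as a list of pixel rows (this representation is an illustrative assumption, not the camera chip's actual data format):

```python
def downsample(frame, factor=2):
    """Reduce resolution by keeping every `factor`-th pixel
    in both the row and column directions."""
    return [row[::factor] for row in frame[::factor]]

def mirror(frame):
    """Flip each row horizontally (left-right mirror)."""
    return [row[::-1] for row in frame]
```

In the device described here these steps would run inside the camera chip before data reaches the software area, so the Python form is only a functional illustration.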

Accordingly, the data collection area 340 and the firmware area 330 operate in cooperation with each other, through which the hardware area 360 can be controlled. Also, the firmware area may be driven in the camera chip.

The software area 350 may also include an application programming interface (API) area 320 and a middleware area 310.

The API region 320 may be executed in the multimedia device for its control. Also, when the camera unit is configured as an external device separate from the multimedia device, the API region may be executed in a personal computer, a game console, a set-top box, or the like.

Also, the API area 320 may be a simple API that allows the multimedia device to drive the sensor in the hardware area.

The middleware area 310 may include depth-processing middleware as a recognition algorithm area. The middleware can provide an application with a clear user-control API whether the user inputs a gesture through a hand or through the whole body. The middleware area may also include at least one recognition algorithm, and the algorithm can operate using the depth information, color information, infrared information, and voice information obtained in the hardware area.

FIG. 7 is a view showing a plurality of heterogeneous image sensors and a multimedia device according to an embodiment of the present invention. Hereinafter, the plurality of heterogeneous image sensors and the multimedia device according to an embodiment of the present invention will be described with reference to FIG. 7. In FIG. 7, the plurality of heterogeneous image sensors and the multimedia device are shown separately; however, the multiple cameras may be embedded in the multimedia device.

Referring to FIG. 7, the multimedia device 400 according to one embodiment of the present invention is designed with modules of a CPU (Central Processing Unit) 401 and a GPU (Graphics Processing Unit) 404, and the CPU 401 includes an application 402 and a face recognition processing module 403. The plurality of heterogeneous image sensors 420 according to an exemplary embodiment of the present invention includes an ASIC (application-specific integrated circuit) 421, an emitter 422, a first image sensor 423, and a second image sensor 424. The multimedia device 400 and the plurality of heterogeneous image sensors 420 are connected through a wired or wireless interface 410, and may use, for example, a USB (universal serial bus) interface. However, the modules of FIG. 7 are merely one embodiment, and the scope of rights of the present invention should in principle be determined by the claims.

The emitter 422 emits light to at least one user located around the multimedia device 400. The first image sensor 423 captures a first image using the emitted light, extracts depth data from the captured first image, and detects the face of the at least one user using the extracted depth data. The second image sensor 424 captures a second image of the detected face of the user and extracts feature information from the captured second image.

Then, the extracted feature information is transmitted to the face recognition processing module 403 of the multimedia device through the interface 410. Although not shown in FIG. 7, the face recognition processing module 403 is designed to include, for example, a receiving unit, a memory, an extracting unit, a control unit, and the like.

The receiving unit of the face recognition processing module 403 receives the feature information transmitted through the plurality of heterogeneous image sensors 420 and the interface 410. Further, the memory of the face recognition processing module 403 stores feature information and corresponding IDs for at least one user.

Accordingly, the extracting unit of the face recognition processing module 403 extracts from the memory an ID corresponding to the received feature information, and the control unit of the face recognition processing module 403 is designed to automatically perform predetermined functions corresponding to the ID.
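The step of automatically performing predetermined functions for a recognized ID can be sketched as a simple lookup-and-apply step. The preset table, its keys, and the idea of representing device settings as a dictionary are all hypothetical illustrations, not the patent's actual data structures:

```python
# Hypothetical per-user preset functions keyed by recognized ID.
USER_PRESETS = {
    "user_a": {"volume": 12, "equalizer": "movie"},
    "user_b": {"volume": 8, "equalizer": "music"},
}

def apply_presets(user_id, device_state):
    """Apply the stored presets for a recognized user, if any;
    unknown users leave the device state unchanged."""
    presets = USER_PRESETS.get(user_id)
    if presets:
        device_state.update(presets)
    return device_state
```

In the device itself, the "functions" could of course be richer than settings values (e.g., launching a preferred service), but the dispatch pattern would be the same.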

On the other hand, when the face recognition processing module is designed to be performed by the CPU of the multimedia device as shown in FIG. 7, there is an advantage in extending features such as recognizing various faces and providing various functions.

FIG. 8 is a view showing a plurality of heterogeneous image sensors and a multimedia device according to another embodiment of the present invention. Hereinafter, the plurality of heterogeneous image sensors and the multimedia device according to another embodiment of the present invention will be described with reference to FIG. 8. In FIG. 8, the plurality of heterogeneous image sensors and the multimedia device are shown separately; however, the multiple cameras may be embedded in the multimedia device.

Referring to FIG. 8, the multimedia device 500 according to an exemplary embodiment of the present invention is designed with modules of a CPU (Central Processing Unit) 501 and a GPU (Graphics Processing Unit) 503, and the CPU 501 includes an application 502. The plurality of heterogeneous image sensors 520 according to an embodiment of the present invention includes a face recognition processing module 521, an ASIC (application-specific integrated circuit) 522, an emitter 523, a first image sensor 524, and a second image sensor 525. The multimedia device 500 and the plurality of heterogeneous image sensors 520 are connected through a wired or wireless interface 510; for example, a USB (universal serial bus) interface may be used. However, the modules of FIG. 8 are merely one embodiment, and the scope of rights of the present invention should in principle be determined by the claims.

FIG. 8 differs from FIG. 7 in that the face recognition processing module 521 is mounted on the plurality of heterogeneous image sensors 520; the remaining description is therefore omitted.

Meanwhile, when the face recognition processing module is designed to be performed by the plurality of heterogeneous image sensors 520 as shown in FIG. 8, various types of cameras can be designed through an independent platform.

FIG. 9 is a diagram illustrating a plurality of heterogeneous image sensors according to an exemplary embodiment of the present invention in more detail. Hereinafter, referring to FIG. 9, a plurality of heterogeneous image sensors according to an embodiment of the present invention will be described in detail.

Referring to FIG. 9, a plurality of heterogeneous image sensors according to an exemplary embodiment of the present invention includes a first image sensor group 610, a second image sensor 620, a controller 630, a memory 640, an interface 650, and the like, and is designed to receive audio data from the microphone 670 and the external audio source 660 under the control of the controller 630.

According to an embodiment, the first image sensor may be a depth image sensor.

The depth image sensor is an image sensor in which the pixel values of an image photographed through it represent the distance from the depth image sensor to the subject.

The first image sensor group 610 may include an emitter 680 and a first image sensor 690, and the emitter may be designed as, for example, an infrared (IR) emitter.

To acquire an image through the first image sensor group 610, two methods may be used: a time-of-flight (TOF) method, in which infrared light is emitted from the emitter 680 and the distance between the subject and the depth image sensor is obtained from the phase difference between the emitted infrared light and the infrared light reflected from the subject; and a structured light method, in which an infrared pattern (many infrared points) is emitted from the emitter 680 and the distance between the subject and the depth image sensor is obtained based on the distortion of the pattern captured by the first image sensor 690.

That is, the multimedia device can grasp the distance information of a subject through the depth image sensor. In particular, when the subject is a person, skeleton information and coordinate information of each part of the body can be obtained, and information on specific motions of the body can be obtained by tracking the movement of each body part.

Further, under the control of the controller 630, the light projector 682 of the emitter 680 projects light through the lens 681 to at least one user located in the vicinity of the multimedia device.

In addition, under the control of the controller 630, the first image sensor 690 captures a first image using the light received through the lens 691, extracts depth data from the captured first image, and transmits the depth data to the controller 630.

According to an embodiment, the second image sensor 620 may be an RGB image sensor. The RGB image sensor is an image sensor that acquires color information as a pixel value.

The second image sensor 620 may include three image sensors (CMOS) that obtain information about each color R (Red), G (Green), and B (Blue).

Also, the second image sensor 620 can obtain a relatively high resolution image as compared with the depth image sensor.

The second image sensor 620 captures a second image of the subject, received through the lens 621, under the control of the controller 630. Further, the second image sensor 620 may transmit the feature information extracted from the second image to the controller 630.

In addition, after the first image sensor group 610 and the second image sensor 620 acquire the distance information of the user, they may continuously track the user, and can recognize the motion of the user through the tracked information.

The controller 630 controls the operation of each module. That is, when a photographing start signal is received, the controller 630 controls the first image sensor group 610 and the second image sensor 620 to photograph the subject, analyzes the photographed image, loads setting information from the memory 640, and controls the first image sensor group 610 and the second image sensor 620 accordingly.

The controller 630 is also designed to transmit the extracted feature information to the multimedia device using the interface 650. Accordingly, the multimedia device receiving the feature information can acquire the feature information of the photographed image.

The memory 640 may store the setting values of the first image sensor group 610 and the second image sensor 620. That is, when a signal for photographing a subject is input from the user, the image input through the image sensing unit is analyzed by the controller 630, and image sensor setting values are loaded from the memory 640 according to the analysis result to configure the photographing environments of the first image sensor group 610 and the second image sensor 620.

The memory 640 may be, for example, a flash memory or the like, and the interface 650 may be designed with a USB interface, for example, and connected to an external multimedia device.

Through the above-described configuration, the user can input a predetermined image and voice to the multimedia device, and can control the multimedia device through the input image or voice.

FIG. 10 is a view showing an example of a first image sensor among a plurality of heterogeneous image sensors according to an embodiment of the present invention. Hereinafter, with reference to FIG. 10, an example of a first image sensor among the plurality of heterogeneous image sensors according to an embodiment of the present invention will be described. The IR source 710 shown in FIG. 10 may correspond to the emitter 680 of FIG. 9, and the depth image processor 720 shown in FIG. 10 may correspond to the first image sensor 690 of FIG. 9, so the descriptions of FIG. 9 and FIG. 10 may be applied complementarily. In addition, the camera shown in FIG. 10 can be designed, for example, using the structured light method described above.

As shown in FIG. 10, the IR source 710 is designed to project a coded pattern image successively to a target user 730. The depth image processor 720 estimates the position of the user using the information obtained by distorting the original pattern image by the target user 730.

FIG. 11 is a view showing another example of the first image sensor among the plurality of heterogeneous image sensors according to an embodiment of the present invention. Hereinafter, another example of the first image sensor among the plurality of heterogeneous image sensors according to an embodiment of the present invention will be described with reference to FIG. 11. The LED 810 shown in FIG. 11 may correspond to the emitter 680 of FIG. 9, and the depth image processor 820 shown in FIG. 11 may correspond to the first image sensor 690 of FIG. 9, so the descriptions of FIG. 9 and FIG. 11 may be applied complementarily. Further, the camera shown in FIG. 11 can be designed, for example, using the TOF method described above.

As shown in FIG. 11, the light emitted by the LED 810 is transmitted to the target user 830. The light reflected from the target user 830 is then transmitted to the depth image processor 820. Unlike FIG. 10, the modules shown in FIG. 11 calculate the position of the target user 830 using information on the time difference. This will be described in more detail with reference to FIG. 12.

FIG. 12 is a diagram for explaining a method of calculating distance using the first image sensor shown in FIG. 11. Hereinafter, a method of calculating distance using the first image sensor shown in FIG. 11 will be described with reference to FIG. 12.

As shown in the left graph of FIG. 12, the arrival time t value can be obtained through the time difference between the emitted light and the reflected light.

As shown in the right graph of FIG. 12, the sum of the distance from the LED 810 to the target user 830 and the distance from the target user 830 back to the depth image processor 820 corresponds to the speed of light multiplied by the t value. Consequently, the distance between the LED 810 (or the depth image processor 820) and the target user 830 is estimated as half of that total distance.
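Under the standard TOF assumption that the emitted light travels to the user and back, the one-way distance follows directly from the measured round-trip time t:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_time_s):
    """One-way distance to the target:
    (speed of light x round-trip time) / 2,
    since the light covers the path twice (out and back)."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0
```

For example, a round-trip time of 20 nanoseconds corresponds to a target roughly 3 meters away, which is a typical TV-viewing distance; in practice the sensor measures t indirectly via the phase difference of a modulated signal rather than timing a single pulse.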

FIG. 13 is a view showing an example of the multimedia device shown in FIGS. 1 and 2 in more detail.

The multimedia device 100 shown in FIG. 13 may be connected to a broadcast network and an Internet network. The multimedia device 100 may be, for example, a connected TV, an intelligent TV, a hybrid broadcast broadband TV (HBBTV), a set-top box, a DVD player, a Blu-ray player, and the like.

Referring to FIG. 13, a multimedia device 100 according to an embodiment of the present invention includes a broadcast receiving unit 105, an external device interface unit 135, a storage unit 140, a user input interface unit 150, a control unit 170, a display unit 180, an audio output unit 185, a power supply unit 190, and an image sensing unit 190. The broadcast receiving unit 105 may include a tuner 110, a demodulation unit 120, and a network interface unit 130. Of course, the multimedia device may be designed to include the tuner 110 and the demodulation unit 120 without the network interface unit 130, or conversely to include the network interface unit 130 without the tuner 110 and the demodulation unit 120.

The tuner 110 selects an RF broadcast signal corresponding to a channel selected by the user or all previously stored channels among RF (Radio Frequency) broadcast signals received through the antenna. Also, the selected RF broadcast signal is converted into an intermediate frequency signal, a baseband image, or a voice signal.

The tuner 110 can receive an RF broadcast signal of a single carrier according to an Advanced Television System Committee (ATSC) scheme or an RF broadcast signal of a plurality of carriers according to a DVB (Digital Video Broadcasting) scheme.

 The demodulation unit 120 may perform demodulation and channel decoding, and then output a stream signal TS. At this time, the stream signal may be a signal in which a video signal, a voice signal, or a data signal is multiplexed. For example, the stream signal may be an MPEG-2 TS (Transport Stream) multiplexed with an MPEG-2 standard video signal, a Dolby AC-3 standard audio signal, or the like.

The stream signal output from the demodulation unit 120 may be input to the control unit 170. The control unit 170 performs demultiplexing, video/audio signal processing, and the like, and then outputs the video signal to the display unit 180 and the audio signal to the audio output unit 185.

The external device interface unit 135 can connect the multimedia device 100 with an external device.

The external device interface unit 135 can be connected, by wire or wirelessly, to an external device such as a DVD (Digital Versatile Disc) player, a Blu-ray player, a game device, an image sensor, a camcorder, or a computer (notebook). The external device interface unit 135 transmits video, audio, or data signals input from the connected external device to the control unit 170 of the multimedia device 100. Also, the control unit 170 can output the processed video, audio, or data signals to the connected external device. To this end, the external device interface unit 135 may include an A/V input/output unit (not shown) or a wireless communication unit (not shown).

The A/V input/output unit may include a USB terminal, a CVBS (Composite Video Banking Sync) terminal, a component terminal, an analog terminal, a DVI (Digital Visual Interface) terminal, an HDMI (High Definition Multimedia Interface) terminal, an RGB terminal, a D-SUB terminal, and the like, so that the video and audio signals of an external device can be input to the multimedia device 100.

The wireless communication unit can perform short-range wireless communication with other electronic devices. The image display device 100 can be connected to other electronic devices over a network according to communication standards such as Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, and DLNA (Digital Living Network Alliance).

Also, the external device interface unit 135 may be connected to the various set-top boxes via at least one of the various terminals described above to perform input / output operations with the set-top box.

The network interface unit 130 provides an interface for connecting the multimedia device 100 to a wired/wireless network including the Internet. The network interface unit 130 may include an Ethernet terminal or the like for connection to a wired network, and may use communication standards such as WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless Broadband), Wimax (World Interoperability for Microwave Access), and HSDPA (High Speed Downlink Packet Access) for connection to a wireless network.

The network interface unit 130 can transmit or receive data with other users or other electronic devices via a network connected thereto or another network linked to the connected network.

The storage unit 140 may store a program for each signal processing and control in the control unit 170 or may store the processed video, audio, or data signals.

The storage unit 140 may also function to temporarily store video, audio, or data signals input from the external device interface unit 135 or the network interface unit 130. In addition, the storage unit 140 may store information on a predetermined broadcast channel through the channel memory function.

In addition, the storage unit 140 may store user information of the multimedia device. The user information may include at least one of a multimedia device usage time of each user, usage contents and services, sound settings, and image settings.

The storage unit 140 may include at least one type of storage medium, such as a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), RAM, or ROM (EEPROM, etc.).

FIG. 13 shows an embodiment in which the storage unit 140 is provided separately from the control unit 170, but the scope of the present invention is not limited thereto. The storage unit 140 may be included in the controller 170.

The user interface unit 150 transmits a signal input by the user to the control unit 170 or a signal from the control unit 170 to the user.

For example, the user interface unit 150 may receive control signals for power on/off, channel selection, screen settings, and the like from the remote control device 200 according to various communication methods such as RF (radio frequency) communication and infrared (IR) communication, or may transmit control signals from the control unit 170 to the remote control device 200.

The control unit 170 may demultiplex the stream input through the tuner 110, the demodulation unit 120, or the external device interface unit 135, process the demultiplexed signals, and generate and output signals for video or audio output.

The video signal processed by the control unit 170 may be input to the display unit 180 and displayed as an image corresponding to the video signal. The video signal processed by the control unit 170 may also be input to an external output device through the external device interface unit 135.

The audio signal processed by the control unit 170 may be output as sound through the audio output unit 185. The audio signal processed by the control unit 170 may also be input to an external output device through the external device interface unit 135.

The display unit 180 converts the video signal, data signal, or OSD signal processed by the control unit 170, or the video signal and data signal received from the external device interface unit 135, into R, G, and B signals to generate a driving signal.

The audio output unit 185 receives a signal processed by the control unit 170, for example, a stereo signal, a 3.1-channel signal, or a 5.1-channel signal, and outputs it as sound. The audio output unit 185 may be implemented with various types of speakers.

In addition, the audio output unit 185 may adjust the sound through beamforming, and may perform sound settings such as an equalizer, bass enhancement, ear guard, music clarity, noise cancellation, volume setting (Dynamic Booster), and three-dimensional stereo sound (3D sound) based on the user information previously stored in the storage unit 140.

The multimedia device may further include an image sensing unit 190 for photographing a user. The image information photographed by the image sensing unit 190 may be input to the control unit 170.

In addition, the image sensing unit 190 may recognize the user currently using the multimedia device from the input image, and may include a plurality of image sensors capable of acquiring different kinds of information. In this regard, refer to FIG. 9 above.

The control unit 170 may detect the gesture of the user by combining the images captured by the image sensing unit 190 and the sensed signals, respectively.

Accordingly, the control unit 170 may include a recognition unit 171 for recognizing the user and user gestures in the image input through the image sensing unit 190, a sensing unit 172 for sensing whether the recognized user has changed, and a priority setting unit 173 for setting a priority user when a plurality of users are recognized.

The recognition unit 171 may store data on user recognition information and user gesture recognition information in the database 141, and when predetermined information is acquired from the image input through the image sensing unit 190, may search the database 141 based on the acquired information to recognize the user and the user gesture.

When the user is recognized through the recognition unit 171, the sensing unit 172 can detect whether the recognized user has changed by re-recognizing the user at predetermined time intervals. In order not to interfere with the display screen of the multimedia device, the check for a user change may run as a background operation, and a predetermined operation may be performed only when a user change is detected. Alternatively, the user may be re-recognized only when the user's movement is detected.

When there are a plurality of users recognized through the recognition unit 171, the priority setting unit 173 may set a control user of the multimedia device or a priority user for adjusting the viewing environment.

That is, the user information of each of the recognized users is searched, and through the searched user information a user whose preferred genre matches the genre of the content currently being used on the multimedia device may be set as the priority user; alternatively, the user who started watching the content first among the plurality of users may be set as the priority user.
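The selection rule above can be sketched as follows. The dictionary keys and the assumption that the recognized-user list is ordered by who started watching first are illustrative, not taken from the patent:

```python
def select_priority_user(users, current_genre):
    """Pick the priority user among the recognized users.

    `users` is a list of dicts, assumed ordered by viewing start time
    (earliest viewer first); each dict has an 'id' and a
    'favorite_genre' key."""
    # Prefer a user whose favorite genre matches the current content.
    for user in users:
        if user.get("favorite_genre") == current_genre:
            return user["id"]
    # Otherwise fall back to the user who started watching first.
    return users[0]["id"] if users else None
```

If several users' favorite genres match, this sketch keeps the earliest viewer among them, which is one reasonable way to combine the two criteria the text describes.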

In addition, the priority setting unit 173 can display a predetermined selection menu and receive a selection signal of a priority user from the user.

According to an embodiment of the present invention, the control unit 170 searches the storage unit 140 for the user information of the user recognized through the recognition unit 171, searches for the user's preferred genre based on the searched user information, and, based on the searched preferred genre, controls the sound adjustment and sound output environment settings using the beamforming of the audio output unit 185 as well as the setting of the image output environment through the display unit 180.

When a user change is detected by the sensing unit 172, the control unit 170 recognizes the changed user through the image sensing unit 190 and the recognition unit 171, searches the storage unit 140 for the changed user's information, searches for the user's preferred genre based on the searched user information, and, based on the searched preferred genre, adjusts the viewing environment, such as the sound settings to which the beamforming of the audio output unit is applied, the sound output environment, and the video output environment.

In addition, the control unit 170 can adjust the viewing environment, such as the sound settings to which the beamforming of the audio output unit is applied, the sound output environment, and the video output environment, based on the priority user set by the priority setting unit 173.

The remote control device 200 transmits user input to the user interface unit 150. To this end, the remote control device 200 can use Bluetooth, RF (radio frequency) communication, infrared (IR) communication, UWB (Ultra Wideband), ZigBee, or the like.

Also, the remote control device 200 can receive the video, audio or data signal output from the user interface unit 150 and display it on the remote control device 200 or output sound or vibration.

FIG. 14 illustrates a user and a multimedia device in accordance with an embodiment of the present invention.

According to the embodiment, the multimedia device 1900 can acquire the user image through the image sensing units 1901 and 1902 of the multimedia device to recognize the user 1903.

In addition, the image sensing unit may include two image sensor modules 1901 and 1902 that acquire different information from each other to accurately recognize a user. That is, according to an embodiment, the image sensing unit may include a depth image sensor, and an RGB image sensor (RGB cam). This will be described in detail below with reference to FIG.

As shown in the figure, the image sensing units 1901 and 1902 of the multimedia device 1900 are positioned at the lower end of the multimedia device, making it easy to detect the center of the user's body, which allows smooth user recognition.

Through the above-described configuration, the multimedia device acquires information on the user and the user location, and can provide a viewing environment suitable for the user based on the obtained information.

FIG. 15 is a diagram illustrating an image of a depth image sensor that recognizes the coordinates of each part of a user's body according to an embodiment of the present invention.

Referring to FIG. 15, the multimedia device can acquire coordinate information for each part of the user's body through the image photographed through the depth image sensor.

That is, when the user 2401 is photographed through the depth image sensor, the depth image sensor can obtain distance information of each part of the user's body as image information.

For example, coordinate information for only the right elbow 2402 of the user's body can be obtained. In addition, various numerical information such as a head size, a shoulder width, and an arm length of the user can be obtained through the obtained distance information of each part of the user's body, thereby enabling the user to be recognized.
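As an illustrative sketch, assuming each body part is reported as a 3-D coordinate in meters (the joint names and coordinate values below are hypothetical), such numerical information reduces to distances between joint coordinates:

```python
import math

def joint_distance(p1, p2):
    """Euclidean distance between two 3-D joint coordinates."""
    return math.dist(p1, p2)

# Hypothetical skeleton coordinates (meters) obtained
# from a depth image of the user.
skeleton = {
    "left_shoulder": (-0.2, 1.4, 2.0),
    "right_shoulder": (0.2, 1.4, 2.0),
}

# Shoulder width is simply the distance between the two shoulder joints.
shoulder_width = joint_distance(skeleton["left_shoulder"],
                                skeleton["right_shoulder"])
```

Head size, arm length, and similar measurements would be computed the same way from other joint pairs, and the resulting set of measurements could feed the user-identification step described earlier.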

When the user's face is to be recognized, the coordinates of the head part from the obtained distance information are transmitted to the RGB image sensor of the image sensing unit, so that the RGB image sensor can acquire a zoomed image of the user's head according to the head coordinates.

In addition, when a user moves a body and performs a specific action, the movement of the distance information of each part of the body can be sensed to recognize the action.

FIG. 16 is a flowchart illustrating a process of adjusting a viewing environment in a multimedia device according to an embodiment of the present invention.

According to an embodiment, the multimedia device recognizes a user currently using the multimedia device (S2001).

When a signal for adjusting the viewing environment is input from the user, or when the multimedia device automatically performs viewing environment adjustment upon executing predetermined content, the multimedia device recognizes the user currently using the device.

In addition, the multimedia device may include an image sensing unit for recognizing the user. For smooth user recognition, as described with reference to FIG. 9, the image sensing unit may include a depth image sensor that acquires the distance information of an object as an image, and an RGB image sensor that acquires color information as an image.

That is, the user currently using the multimedia device is identified through information on the distance between the user and the multimedia device, user skeleton information based on the distance information of each part of the user's body, and user color information.

For user identification, the multimedia device transmits the information obtained through its image sensing unit to the user recognition unit of the multimedia device, and the corresponding user identification information can be retrieved by comparing the obtained information with the stored user recognition information.

Also, in order to smoothly perform the user recognition process, a recognition message as shown in FIG. 11 may be displayed.

Next, the multimedia device retrieves information on the recognized user (S2002).

The multimedia device retrieves previously stored user information corresponding to the recognized user when the user using the current multimedia device is recognized.

The user information may be built from a history of the contents and services used by the user in the multimedia device, and may include information on at least one of the user's multimedia device usage time, used contents and services, sound settings, and image quality settings.

Next, the multimedia device analyzes user preference genres in the retrieved user information (S2003).

That is, the multimedia apparatus searches through the information included in the user information to find out what content or service the user has used and for how long, and through the search result it can analyze the user's favorite genre.
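
The genre analysis just described can be sketched as a simple aggregation over the usage history. The history entries and genre names below are invented for illustration; only the "most accumulated usage time wins" rule is taken from the text.

```python
from collections import defaultdict

# Hypothetical usage history: (genre, minutes used) entries.
history = [
    ("sports", 120), ("movie", 45), ("sports", 90), ("news", 30),
]

def preferred_genre(usage_history):
    """Aggregate usage time per genre and return the most-used genre."""
    totals = defaultdict(int)
    for genre, minutes in usage_history:
        totals[genre] += minutes
    return max(totals, key=totals.get)

print(preferred_genre(history))  # sports accumulates 210 minutes in total
```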

Next, the multimedia apparatus sets beamforming in the audio output of the multimedia apparatus based on the analyzed user preference genre (S2004).

Generally, the beam-forming technique is used so that the sound output of the multimedia device exhibits high directivity in a desired direction from the multimedia device. Once the directivity is formed through beamforming, acoustic signals transmitted in directions outside the beam are reduced while signals in the direction of interest are selectively acquired, thereby configuring a more suitable viewing environment. That is, the user can listen to the desired sound signals more clearly against the noise.

Therefore, the multimedia device can configure beamforming based on the user preference genre and the user distance information, and thus the user can be provided with an appropriate viewing environment.
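
As a rough illustration of steering the directivity toward the user's position, the following sketch computes delay-and-sum steering delays for a linear speaker array. Delay-and-sum is one standard beamforming technique, not necessarily the one in this patent, and the speaker count, spacing, and angle are assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def steering_delays(num_speakers, spacing_m, angle_deg):
    """Per-speaker delays (seconds) that steer a linear array's main lobe
    toward angle_deg (0 degrees = broadside, straight ahead of the array)."""
    angle = math.radians(angle_deg)
    delays = [i * spacing_m * math.sin(angle) / SPEED_OF_SOUND
              for i in range(num_speakers)]
    shift = min(delays)               # keep all delays non-negative
    return [d - shift for d in delays]

# Steer a 4-speaker array with 8 cm spacing toward a user 20 degrees off-center.
print(steering_delays(4, 0.08, 20.0))
```

The user's off-center angle would come from the distance information acquired by the image sensing unit, tying the beamforming configuration to the recognized user's position.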

In addition, the multimedia device may perform sound setting and image setting according to the user information of the recognized user.

In other words, the multimedia apparatus may perform, based on the user information, at least one sound setting of the multimedia device among an equalizer, bass enhancement, ear guard, music clarity, noise reduction, volume, dynamic booster (Dynamic Booster), and 3D sound (3D sound).

The multimedia device may also perform, based on the user information, at least one image setting among a menu screen configuration, brightness, contrast, color, focus, and exposure of the multimedia device, as well as a setting of the user preference channel.

In this way, the multimedia device recognizes the user through the image input through its image sensing unit and sets the viewing environment to correspond to the recognized user, so that the user can use the multimedia device in a convenient and better environment without performing a separate setting process.
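
Applying the recognized user's stored sound and image settings can be sketched as overlaying a per-user preset on the device defaults. The preset keys and values below (`equalizer`, `brightness`, `volume`) are hypothetical placeholders for the settings enumerated above.

```python
# Hypothetical per-user viewing presets, keyed by the recognized user ID.
PRESETS = {
    "user_a": {"equalizer": "movie", "brightness": 60, "volume": 12},
}
DEFAULTS = {"equalizer": "flat", "brightness": 50, "volume": 10}

def build_viewing_environment(user_id, presets, defaults):
    """Overlay a recognized user's stored settings on the device defaults;
    an unrecognized user simply keeps the defaults."""
    settings = dict(defaults)
    settings.update(presets.get(user_id, {}))
    return settings

print(build_viewing_environment("user_a", PRESETS, DEFAULTS))
```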

FIG. 17 is a flowchart illustrating a process of adjusting a viewing environment when there are a plurality of viewers of a multimedia device according to an exemplary embodiment of the present invention.

According to an embodiment, the multimedia device recognizes a user currently using the multimedia device (S2101).

The user recognition step S2101 recognizes the user through the image input through the image sensing unit of the multimedia device, similarly to the user recognition step S2001 of FIG. 16.

However, if multiple users are recognized in the image input through the image sensing unit of the multimedia device, the multimedia device recognizes each of the plurality of users.

Next, the multimedia device retrieves the information about each recognized user (S2102).

When a plurality of users using the current multimedia device are recognized, the multimedia device searches pre-stored user information corresponding to each recognized user.

The user information may be built from a history of the contents and services used by each user in the multimedia device, and may include information on at least one of the user's multimedia device usage time, used contents and services, sound settings, and image quality settings.

That is, an identification (ID) is assigned to each user, and user information of the corresponding user is retrieved for each user ID.

Next, the multimedia device analyzes user preference genres of each user in the searched user information (S2103).

That is, the multimedia device searches through the information included in each piece of user information to find out what contents or services each user has used and for how long, thereby determining which genre of contents or services each user prefers, and identifies the preference genre of each user according to the search result.

Next, the multimedia device sets beamforming in the audio output of the multimedia device based on the analyzed preference genre of a priority user among the users (S2104).

That is, since the directivity of the beamforming differs according to the position of the user, the directivity is set based on a priority user among the plurality of users. The process of setting the priority user among the plurality of users will be described in detail with reference to FIG. 18.

Accordingly, even when there are two or more users currently using the multimedia device, the multimedia device can configure the beamforming according to the priority user through the user preference genre and the user distance information, and a suitable viewing environment can be provided.

In addition, as in the viewing environment setting of step S2004 of FIG. 16, the multimedia device can perform the sound setting and the image setting according to the user information of the recognized priority user.

FIG. 18 is a flowchart showing in detail the step (S2104) of setting beamforming according to the priority user of FIG. 17.

According to an embodiment, when a plurality of users are recognized in the image input through the image sensing unit of the multimedia device, a priority user for setting the viewing environment can be set among the plurality of users.

Accordingly, when the preferred genre of each user has been analyzed through the preference genre analysis step (S2103), the genre of the currently used content is searched (S2201).

That is, the multimedia device retrieves the genre of the currently used content through the additional information of the content currently being used in the multimedia device. The additional information may be information included in a broadcast signal when the content being used is a terrestrial broadcast, and may be information included in the file when the content is a video on demand (VoD).

Next, the genre of the content currently used in the multimedia device is compared with the preference genre of each user searched in step S2103, and it is searched whether there is a user whose preference genre matches (S2202).

As a result of the search (S2202), if there is a user whose genre matches, that user is set as the priority user among the plurality of users currently using the multimedia device (S2204).

That is, when the genre of the content currently being used in the multimedia device matches the preference genre of a user, it is likely that this user is directly using the content, so the matching user is set as the priority user and the viewing environment can be set based on the user directly using the content.

According to the embodiment, when there are a plurality of users whose genres match, the user selection menu shown in FIG. 22 may be displayed so that a selection of the priority user can be received from the user.

On the other hand, if there is no user whose genre matches in step S2202, the multimedia device searches for the first viewer among the recognized plurality of users (S2203).

That is, the multimedia apparatus searches for information on the usage start time among the user information retrieved in step S2102, and determines the first viewer.

Also, the first viewer is set as a priority user (S2205).

That is, if there is no user whose genre matches, the user who started viewing earliest among the users currently using the multimedia device is likely to be the one directly using the content currently being used in the multimedia device, so that user can be set as the priority user and the viewing environment can be set based on the user directly using the content.

Next, acoustic beamforming, image setting, and sound setting are performed based on the priority user (S2206).

That is, when the priority user is set, operations for adjusting the viewing environment, such as the acoustic beamforming setting, the video output setting, and the sound output setting according to the user information of the priority user, are performed as in the setting step S2104 of FIG. 17.

Accordingly, even when there are a plurality of users using the multimedia device, the multimedia device can appropriately determine the priority user and perform a proper viewing environment setting.
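The priority-selection flow of FIG. 18 can be sketched directly: look for a user whose preferred genre matches the content genre (S2202/S2204), otherwise fall back to the earliest viewer (S2203/S2205). The dictionary keys `id`, `preferred_genre`, and `start_time` below are hypothetical names for the corresponding user information.

```python
def select_priority_user(users, current_genre):
    """Pick the priority user: a genre-matching user wins; otherwise the
    user with the earliest usage start time is chosen."""
    matches = [u for u in users if u["preferred_genre"] == current_genre]
    if matches:
        # With several matching users, a selection menu could be shown instead.
        return matches[0]["id"]
    return min(users, key=lambda u: u["start_time"])["id"]

viewers = [
    {"id": "user_a", "preferred_genre": "sports", "start_time": 1005},
    {"id": "user_b", "preferred_genre": "movie",  "start_time": 1000},
]
print(select_priority_user(viewers, "sports"))  # genre match wins: user_a
print(select_priority_user(viewers, "news"))    # no match: earliest viewer user_b
```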

FIG. 19 is a view illustrating a display screen 2300 including a user recognition message according to an embodiment of the present invention.

According to the embodiment, when the user inputs a predetermined command for adjusting the viewing environment, or when the multimedia device automatically adjusts the viewing environment, the multimedia device performs the user recognition for the adjustment of the viewing environment and a recognition message 2301 can be displayed.

The multimedia device recognizes the user in the image input through its image sensing unit for the purpose of adjusting the viewing environment. Therefore, when the user moves during the input of the image so that an appropriate image is not obtained, or when the user executes another operation in the multimedia device while being recognized in the input image, the user recognition may not proceed smoothly; thus, the recognition message 2301 may be displayed to facilitate smooth user recognition.

In addition, if the user desires to cancel the user recognition, the user can do so by selecting the cancel menu item 2302.

Also, when a selection signal of the cancel menu item 2302 is received, the multimedia device may, according to its settings, display a menu for canceling the viewing environment adjustment or for adjusting the viewing environment manually, and may receive a position selection signal from the user to perform the viewing environment adjustment.

FIG. 20 is a view illustrating a display screen 2400 including a user setting application message according to an embodiment of the present invention.

According to an embodiment of the present invention, when a user is recognized in an image input through the image sensing unit of the multimedia device, a notification message 2401 informing that the viewing environment is being set according to the user information of the recognized user can be displayed.

When the notification message 2401 is displayed, the multimedia device recognizes the user and adjusts the viewing environment based on the user settings of the recognized user.

Accordingly, if it is not desired to adjust the viewing environment, the user can select the cancel menu item 2402 to stop the adjustment of the viewing environment.

Also, the notification message 2401 may include image information or character information of the recognized user. For example, an image of the portion where the user appears may be cropped from the image acquired through the image sensing unit of the multimedia device and included as the image information, and information such as the user name and preference genre of the recognized user can be included as the character information.

In addition, information about the viewing environment set according to the user information of the recognized user may be included.

In addition, the notification message 2401 may be displayed in a translucent color or minimized in a predetermined area of the display unit, so as not to interfere with the display screen of the content or service currently used in the multimedia device.

Accordingly, while predetermined content is being used in the multimedia device, the adjustment of the viewing environment can be performed as a background operation, and the user can check its progress through the notification message 2401 and determine whether to proceed with the viewing environment adjustment.

FIG. 21 is a view showing a display screen 2500 including a confirmation menu for applying a user setting according to an embodiment of the present invention.

According to an embodiment of the present invention, when the multimedia device recognizes a user in the image input through its image sensing unit, the multimedia device can display a confirmation menu 2501 for receiving a confirmation command as to whether or not to set the viewing environment of the multimedia device according to the user information of the recognized user.

The confirmation menu 2501 may include image information or character information of the recognized user, and may include viewing environment setting information of the recognized user. That is, for example, an image of the portion where the user appears may be cropped by the image sensing unit of the multimedia device and included as the image information, and information such as the user name can be included as the character information.

When the confirmation menu item 2502 is selected from the confirmation menu 2501, the viewing environment of the multimedia device is set according to the user information of the recognized user. When the cancel menu item 2503 is selected, the multimedia device re-recognizes the user through its image sensing unit, or erases the confirmation menu 2501 and ends the operation for setting the viewing environment.

Also, the confirmation menu 2501 may be displayed in a translucent color or minimized in a predetermined area of the display unit, so as not to interfere with the display screen of the content or service currently used in the multimedia device.

Accordingly, when a user is mistakenly recognized through the image sensing unit of the multimedia device, the viewing environment can be prevented from being set according to the mistaken recognition, thereby improving reliability.

FIG. 22 is a view illustrating a display screen 2600 including a user selection menu according to an embodiment of the present invention.

According to the embodiment, when there are a plurality of users recognized in the image input through the image sensing unit of the multimedia device, a selection menu 2601 for selecting, among the plurality of recognized users, the priority user for the viewing environment setting can be displayed.

That is, according to the embodiment, the selection menu 2601 may be displayed to select the priority user when there are a plurality of users whose genres match in step S2204 of FIG. 18, or the selection menu 2601 may be displayed in step S2104 of FIG. 17 to set the priority user immediately without going through the steps of FIG. 18.

The selection menu 2601 may include information 2602, 2603, and 2604 for the recognized plurality of users.

That is, when a plurality of users are recognized in the image input through the image sensing unit of the multimedia device, the multimedia device can extract a cropped image of the portion in which each user is recognized, display the cropped images in the selection menu 2601, and receive the selection signal of the priority user from the user.

Also, information identifying each recognized user can be displayed along with the cropped image.

The user can select a priority user via the pointer 2704 in a state where the selection menu 2601 is displayed.

The multimedia device loads viewing preference information for the selected user when the selection signal is received, and provides an appropriate viewing environment to the user according to the loaded information.

FIG. 23 is a view showing a display screen 2700 including a personalized initial screen according to an embodiment of the present invention.

According to an embodiment, when a user of the multimedia device is recognized, the multimedia device may provide a personalized initial screen for each recognized user.

That is, for example, as shown in FIG. 23, a personalized menu 2701 can be displayed.

The personalized menu 2701 may include a preferred channel list 2702 and a preferred application list 2703.

The preferred channel list 2702 can be configured by retrieving the channel usage time of the user based on the user information pre-stored in the multimedia device, or can be configured by receiving from the user a selection signal for channel items to be included in the preferred channel list 2702.

The preferred application list 2703 can be configured by retrieving the application execution time and frequency of use of the user based on the user information previously stored in the multimedia device, or can be configured by receiving from the user a selection signal for application items to be included in the list.
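
Building both lists from the stored usage data amounts to ranking items by accumulated usage, as a minimal sketch shows. The channel names, minute counts, and launch counts below are invented for illustration.

```python
def top_items(usage, n=3):
    """Rank items (channels or applications) by accumulated usage value,
    e.g. viewing minutes or launch count, and return the top n."""
    return [item for item, _ in
            sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:n]]

channel_minutes = {"ch7": 300, "ch11": 120, "ch5": 45, "ch9": 200}
app_launches = {"weather": 4, "video": 19, "music": 11}

print(top_items(channel_minutes))     # ['ch7', 'ch9', 'ch11']
print(top_items(app_launches, n=2))   # ['video', 'music']
```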

The user may also move the pointer 2704 through the user input interface to select a channel item contained in the preferred channel list 2702 or an application item contained in the preferred application list 2703, so that the user can directly perform an operation for using the desired content or service in the personalized menu 2701.

That is, according to the embodiment of the present invention, the multimedia device provides the personalized menu to the user currently using the multimedia device according to the recognized user information used to configure the viewing environment, so that the user can conveniently use the multimedia device.

The multimedia device and its operation method according to the present invention are not limited to the configuration and method of the embodiments described above; all or part of each embodiment may be selectively combined so that the embodiments can be variously modified.

Meanwhile, the operating method of the multimedia device of the present invention can be implemented as processor-readable code on a recording medium readable by a processor included in the multimedia device. The processor-readable recording medium includes all kinds of recording apparatuses in which data that can be read by the processor is stored. Examples of the processor-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and it may also be implemented in the form of a carrier wave such as transmission over the Internet. In addition, the processor-readable recording medium may be distributed over network-connected computer systems so that the processor-readable code can be stored and executed in a distributed fashion.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments, and it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the present invention.

100: Multimedia device
105: Broadcast receiver
110: tuner
120: demodulator
130: Network interface unit
135: External device interface unit
140:
150: user input interface unit
170:
180:
185: Audio output section
190: Power supply
200: remote control device

Claims (20)

delete delete delete delete delete delete A method of providing an optimal viewing environment of a multimedia device,
Recognizing a user using the multimedia device;
Retrieving user information of the recognized user when a plurality of users are recognized;
Searching for a favorite genre of each user based on the searched user information;
Retrieving a genre of content currently in use in the multimedia device;
Setting a user having a matching genre of the currently used content and the searched favorite genre as a priority user; And
Constructing an acoustic beamforming (Beam-forming) based on the preference user's preferred genre
And providing the optimal viewing environment of the multimedia device.
delete delete A method of providing an optimal viewing environment of a multimedia device,
Recognizing a user using the multimedia device;
Retrieving user information of the recognized user when a plurality of users are recognized;
Searching for a favorite genre of each user based on the searched user information;
Retrieving the start time of using the multimedia device of each user;
Setting a user whose usage start time is earliest as a priority user; And
Constructing an acoustic beamforming (Beam-forming) based on the preference user's preferred genre
And providing the optimal viewing environment of the multimedia device.
8. The method of claim 7,
Retrieving whether the recognized user is changed;
If the user change is detected, returning to the step of recognizing the user
The method comprising the steps of:
An image sensing unit for acquiring user image information;
A recognition unit for recognizing a user based on the data acquired by the image sensing unit;
A storage unit for storing user information for each user;
A priority setting unit configured to set a priority user among the plurality of users when a plurality of users are recognized through the recognition unit;
An acoustic output unit adapted to adjust sound by applying beam-forming; And
Searching the user information of the user recognized through the recognition unit through the storage unit, searching for a user preference genre based on the searched user information, and controlling sound adjustment applying beamforming of the sound output unit based on the searched favorite genre And a controller,
Wherein the priority setting unit compares a genre of a content currently used in the multimedia device with a preference genre of each recognized user to set a user having the genre as a priority user,
Wherein the control unit controls the beamforming configuration of the sound output unit based on the preference genre of the set priority user,
Multimedia device that can adjust viewing environment
13. The method of claim 12,
Wherein,
Based on the user information, controlling the acoustic output environment setting of the sound output section
Multimedia device that can adjust viewing environment
13. The method of claim 12,
Wherein,
And controlling the video output environment setting of the multimedia device based on the user information
Multimedia device that can adjust viewing environment
13. The method of claim 12,
The image sensing unit includes:
A depth image sensor capable of obtaining the user distance information; And
An RGB image sensor capable of obtaining color information of the user
A multimedia device capable of adjusting the viewing environment
13. The method of claim 12,
The user information includes:
And at least one of the multimedia device usage time, the usage content, the sound setting, and the image quality setting of each user
Multimedia device that can adjust viewing environment.
13. The method of claim 12,
The multimedia device includes:
A sensing unit for sensing whether the user is changed,
Further comprising:
Wherein,
A user recognition unit for recognizing the changed user through the image sensing unit and the recognition unit when the user change is detected by the sensing unit, searching through the storage unit for the user information of the re-recognized user, Searching for a genre, and controlling an acoustic re-arrangement to which beamforming of the sound output section is applied based on the searched favorite genre
Multimedia device that can adjust viewing environment.
delete delete An image sensing unit for acquiring user image information;
A recognition unit for recognizing a user based on the data acquired by the image sensing unit;
A storage unit for storing user information for each user;
A priority setting unit configured to set a priority user among the plurality of users when a plurality of users are recognized through the recognition unit;
An acoustic output unit adapted to adjust sound by applying beam-forming; And
Searching the user information of the user recognized through the recognition unit through the storage unit, searching for a user preference genre based on the searched user information, and controlling sound adjustment applying beamforming of the sound output unit based on the searched favorite genre And a controller
Wherein the priority setting unit searches the multimedia device start time of each user to set the user whose start time of use is the earliest as the priority user,
Wherein the control unit controls the beamforming configuration of the sound output unit based on the preference genre of the set priority user,
Multimedia device that can adjust viewing environment.
KR1020100112530A 2010-11-12 2010-11-12 Method and apparatus for providing optimized viewing conditions in multimedia device KR101741550B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020100112530A KR101741550B1 (en) 2010-11-12 2010-11-12 Method and apparatus for providing optimized viewing conditions in multimedia device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020100112530A KR101741550B1 (en) 2010-11-12 2010-11-12 Method and apparatus for providing optimized viewing conditions in multimedia device

Publications (2)

Publication Number Publication Date
KR20120051210A KR20120051210A (en) 2012-05-22
KR101741550B1 true KR101741550B1 (en) 2017-06-15

Family

ID=46268329

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100112530A KR101741550B1 (en) 2010-11-12 2010-11-12 Method and apparatus for providing optimized viewing conditions in multimedia device

Country Status (1)

Country Link
KR (1) KR101741550B1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3422306A1 (en) 2004-11-17 2019-01-02 Arthur J. Zito, Jr. User-specific dispensing system
US7398921B2 (en) 2004-11-17 2008-07-15 Zito Jr Arthur J User-specific dispensing system
US11553228B2 (en) 2013-03-06 2023-01-10 Arthur J. Zito, Jr. Multi-media presentation system
KR102170254B1 (en) 2013-12-10 2020-10-26 삼성전자주식회사 Adaptive beam selection apparatus and method in a wireless communication system
KR20150108028A (en) 2014-03-16 2015-09-24 삼성전자주식회사 Control method for playing contents and contents playing apparatus for performing the same
KR102342081B1 (en) 2015-04-22 2021-12-23 삼성디스플레이 주식회사 Multimedia device and method for driving the same

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006324809A (en) * 2005-05-17 2006-11-30 Sony Corp Information processor, information processing method, and computer program
JP2007282191A (en) * 2006-03-14 2007-10-25 Seiko Epson Corp Guide apparatus and method of controlling the same

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006324809A (en) * 2005-05-17 2006-11-30 Sony Corp Information processor, information processing method, and computer program
JP2007282191A (en) * 2006-03-14 2007-10-25 Seiko Epson Corp Guide apparatus and method of controlling the same

Also Published As

Publication number Publication date
KR20120051210A (en) 2012-05-22

Similar Documents

Publication Publication Date Title
KR101731346B1 (en) Method for providing display image in multimedia device and thereof
EP2453384B1 (en) Method and apparatus for performing gesture recognition using object in multimedia device
KR20120051212A (en) Method for user gesture recognition in multimedia device and multimedia device thereof
US8577092B2 (en) Multimedia device, multiple image sensors having different types and method for controlling the same
US9250707B2 (en) Image display apparatus and method for operating the same
US9025023B2 (en) Method for processing image data in television having multiple image sensors and the television for controlling the same
US9390714B2 (en) Control method using voice and gesture in multimedia device and multimedia device thereof
KR101758271B1 (en) Method for recognizing user gesture in multimedia device and multimedia device thereof
US8830302B2 (en) Gesture-based user interface method and apparatus
KR101899597B1 (en) Method for searching object information and dispaly apparatus thereof
KR101741550B1 (en) Method and apparatus for providing optimized viewing conditions in multimedia device
US20120268424A1 (en) Method and apparatus for recognizing gesture of image display device
US20130263048A1 (en) Display control apparatus, program and display control method
KR101772456B1 (en) Multimedia device, multiple image sensors having different types and the method for controlling the same
KR101819499B1 (en) Multimedia device, multiple image sensors having different types and the method for controlling the same
KR20120051213A (en) Method for image photographing of multimedia device and multimedia device thereof
KR20210002797A (en) Display device
KR20120050614A (en) Multimedia device, multiple image sensors having different types and the method for controlling the same
KR101629324B1 (en) Multimedia device, multiple image sensors having different types and the method for controlling the same
KR20150012677A (en) multimedia apparatus and method for predicting user command using the same
KR20120074484A (en) Multimedia device for processing data by using image sensor and the method for controlling the same
KR101759936B1 (en) Multimedia device using multiple image sensors having different types and method for controlling the same
KR20150059402A (en) multimedia apparatus and method for displaying pointer thereof
KR20120102337A (en) Electronic device and method for managing control right
KR20110094727A (en) Apparatus and method for detecting motion data of infrared ray remocon

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant