JP6030945B2 - Viewer video display control device, viewer video display control method, and viewer video display control program


Info

Publication number
JP6030945B2
Authority
JP
Japan
Prior art keywords
video
viewer
image
sign language
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2012277959A
Other languages
Japanese (ja)
Other versions
JP2014123818A (en)
Inventor
美佐 平尾
陽子 石井
泰彦 宮崎
透 小林
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社
Priority to JP2012277959A
Publication of JP2014123818A
Application granted
Publication of JP6030945B2
Legal status: Active (current)
Anticipated expiration

Description

  The present invention relates to a viewer video display control device, a viewer video display control method, and a viewer video display control program that facilitate communication when users who communicate by sign language or spoken language, such as a user with a hearing impairment and his or her family members, view content video including television broadcasts.
  The act of "conversing with family while watching TV" is very natural for hearing people (people without hearing impairments). This is because hearing people can communicate with each other by voice alone, without interrupting their viewing of the content video. On the other hand, people with hearing impairments and their family members, who communicate by sign language or spoken language, must take deliberate measures when they want to converse while viewing content video, such as devising their seating positions so that each other's appearance, sign language, and facial expressions are easy to see. This is because sign language and spoken language are communication methods established by watching the other person, and having a conversation while viewing content video requires interrupting the viewing and facing the other person.
  For such a problem, it is conceivable, for example, to apply a system used for remote communication, such as that of Patent Document 1.
[Patent Document 1]: JP 2008-217536 A
  As described above, when a person with a hearing impairment or his or her family members try to communicate while viewing content video such as television, various restrictions apply, and it is difficult to realize smooth communication.
  The system of Patent Document 1 is a system used for remote communication, and gives no consideration to smooth communication among persons with hearing impairments and others while they are viewing content video.
  For example, the self-image displayed on the screen is always displayed; it is not controlled to be shown or hidden according to the state of the user. However, while the user is viewing content video, the video captured by the video camera (hereinafter referred to as user video) needs to be shown or hidden depending on the situation, such as whether the user is having a conversation in sign language or spoken language. This is because the user video does not need to be displayed while the user is not talking, and hiding it is desirable in consideration of the legibility of the content video and of text information such as telops and subtitles.
  In addition, even when the user is not talking, it is desirable that the user video be displayed if there is a change in the user's facial expression. This is because noticing a change in each other's facial expressions through the user video enables smoother communication: such a change can serve as a starting point for conversation, or as a cue that one should refrain from speaking to the other.
  The present invention has been made in view of the above circumstances, and an object of the present invention is to provide a viewer video display control device, a viewer video display control method, and a viewer video display control program that facilitate communication when a user with a hearing impairment views content video.
In order to achieve the above object, the present invention provides a viewer video display control device that controls display of a viewer's video, comprising: a video analysis unit that analyzes camera video input from a camera shooting a viewer watching content video and detects the viewer's video; a TV content determination unit that determines whether the content video is in a CM (commercial); a sign language detection unit that detects, using the camera video, whether the viewer is signing; and a video synthesis unit that, when the TV content determination unit determines that a CM is not in progress, superimposes a sign language video using the viewer's video onto the content video, starting at the timing when the sign language detection unit detects the signing and only while the viewer is signing. The sign language video is a video in which the viewer's signing motion can be visually recognized while the viewer is performing sign language.
In the viewer video display control device, when the TV content determination unit determines that a CM is in progress, the video synthesis unit may superimpose the sign language video using the viewer's video onto the content video.
The viewer video display control device may further comprise a facial expression change detection unit that detects a change in the viewer's facial expression using the camera video. When the TV content determination unit determines that a CM is not in progress and the viewer is not signing, the video synthesis unit may superimpose a facial expression video using the viewer's video onto the content video at the timing when the facial expression change detection unit detects the expression change.
The viewer video display control device may further comprise a facial expression change detection unit that detects a change in the viewer's facial expression using the camera video, and an emphasis unit that performs emphasis processing on the sign language video using the viewer's video, or on the facial expression video using the viewer's video, for a predetermined time after detection by the sign language detection unit or the facial expression change detection unit; the video synthesis unit may then superimpose the emphasized sign language video or facial expression video onto the content video.
The present invention is also a viewer video display control method for controlling display of a viewer's video, performed by a computer, comprising: a video analysis step of analyzing camera video input from a camera shooting a viewer watching content video and detecting the viewer's video; a TV content determination step of determining whether the content video is in a CM; and a video synthesis step of, when the TV content determination step determines that a CM is not in progress, superimposing a sign language video using the viewer's video onto the content video, starting at the timing when the viewer starts signing and only while the viewer is signing. The sign language video is a video in which the viewer's signing motion can be visually recognized while the viewer is performing sign language.
In the viewer video display control method, when the TV content determination step determines that a CM is in progress, the video synthesis step may superimpose the sign language video using the viewer's video onto the content video.
In the viewer video display control method, when the TV content determination step determines that a CM is not in progress and the viewer is not signing, the video synthesis step may superimpose a facial expression video using the viewer's video onto the content video at the timing when the viewer's facial expression changes.
The viewer video display control method may further comprise an emphasis step of emphasizing the sign language video using the detected viewer's video for a predetermined time from the start of signing, or emphasizing the facial expression video using the detected viewer's video for a predetermined time from the expression change; the video synthesis step may then superimpose the emphasized sign language video or facial expression video onto the content video.
The present invention is a viewer video display control program that causes a computer to function as each unit included in the viewer video display control device.
  According to the present invention, it is possible to provide a viewer video display control device, a viewer video display control method, and a viewer video display control program that facilitate communication when a user with a hearing impairment views content video.
FIG. 1 is a configuration diagram showing the overall configuration of a system according to an embodiment of the present invention. FIG. 2 is a block diagram showing the configuration of the control device. FIG. 3 is a block diagram showing the configurations of the sign language determination unit and the facial expression change determination unit. FIG. 4 is a block diagram showing the configuration of the user video generation unit. FIG. 5 is a flowchart showing the processing of the TV content determination unit and the sign language determination unit. FIG. 6 is a flowchart showing the processing of the facial expression change determination unit. FIG. 7 is a schematic diagram showing the processing of the user video generation unit. FIGS. 8 and 9 are image diagrams of example screens on which a sign language video or a facial expression video is displayed.
  Embodiments of the present invention will be described below with reference to the drawings.
  FIG. 1 is an overall configuration diagram of a system according to an embodiment of the present invention. The illustrated system includes a screen 1 that displays content video; a video camera 2 that is installed near the screen 1 and captures a user (viewer) who is viewing the content video; a camera input interface 3; a screen output interface 4; a remote controller 5 used by the user; a user input interface 6; a control device 7 (viewer video display control device); and a video content output device 8, such as a terrestrial digital television broadcast receiver, that outputs the content video.
  A camera image taken by the video camera 2 is input to the control device 7 via the camera input interface 3. In addition, the content video output from the video content output device 8 is input to the control device 7. Further, user input data (instruction information, setting information, etc.) input by the user using the remote controller 5 is input to the control device 7 via the user input interface 6.
  The control device 7 detects the user's face and upper body from the input camera video, detects the user's signing motion, detects changes in facial expression, detects television CMs in the content video, and so on. The control device 7 then superimposes the sign language video on the content video that the user is viewing, while the user is signing and during TV commercials, and outputs the result to the screen 1 via the screen output interface 4. Further, when the user is not signing but there is a change in facial expression, the control device 7 superimposes the facial expression video on the content video being viewed and outputs the composite to the screen 1 via the screen output interface 4.
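  The display policy just described reduces to a small per-frame decision. The following is a minimal sketch in Python; the three detector functions are stand-ins (assumptions for illustration), not the patent's detectors, which are described later in the text.

```python
# Per-frame sketch of the display policy of the control device 7.
# The detector functions below are assumed stubs, not the patented methods.

def is_cm(content_frame) -> bool:              # stub for the TV content determination unit 72
    return False

def is_signing(camera_frame) -> bool:          # stub for the sign language determination unit 73
    return False

def expression_changed(camera_frame) -> bool:  # stub for the expression change determination unit 74
    return False

def decide_overlay(content_frame, camera_frame):
    """Return which user video, if any, to superimpose on this frame."""
    if is_signing(camera_frame):
        return "sign_language_video"       # show while the user is signing
    if is_cm(content_frame):
        return "sign_language_video"       # during a CM, show regardless of signing
    if expression_changed(camera_frame):
        return "facial_expression_video"   # brief overlay after an expression change
    return None                            # otherwise keep the content video clean
```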
  As the control device 7, for example, a PC connected to the screen 1 or a browser built into a television can be used.
  Next, details of the control device 7 will be described with reference to FIG. FIG. 2 is a block diagram showing the configuration of the control device 7. The illustrated control device 7 includes a camera video analysis unit 71, a TV content determination unit 72, a sign language determination unit 73, a facial expression change determination unit 74, a user video generation unit 75, and a video synthesis unit 76.
  The video camera 2 captures the user viewing the content video displayed on the screen 1 and inputs the captured camera video to the camera video analysis unit 71 in units of frames. The camera video analysis unit 71 performs image analysis on each input camera video frame and detects a user video (face, upper body, and so on) for each user. It then outputs coordinate information indicating the image areas of each user's face and upper body, together with the camera video frame, to the TV content determination unit 72. The TV content determination unit 72 detects whether the content video output from the video content output device 8 has switched to a CM.
  The sign language determination unit 73 detects whether or not the user is signing, using the camera video frame. The facial expression change determination unit 74 detects a change in the user's facial expression using the camera video frame. The user video generation unit 75 generates the user's sign language video or facial expression video using the face and upper body detected by the camera video analysis unit 71. The video synthesis unit 76 superimposes the sign language video or facial expression video on the content video, outputs the composite to the screen 1 via the screen output interface 4, and displays it on the screen 1.
  FIG. 3 is a block diagram illustrating the configurations of the sign language determination unit 73 and the facial expression change determination unit 74 of the control device 7. The illustrated sign language determination unit 73 includes a sign language detection unit 731 and a sign language non-detection time reference unit 732, and the facial expression change determination unit 74 includes a facial expression change detection unit 741, a facial expression change detection time reference unit 742, and an icon determination unit 743. These processes will be described later with reference to FIGS. 5 and 6.
  FIG. 4 is a block diagram illustrating the configuration of the user video generation unit 75 of the control device 7. The illustrated user video generation unit 75 includes a user video extraction unit 751, a user video effect processing unit 752, a user video size determination unit 753, a user video position coordinate determination unit 754, and a character information detection unit 755. These processes will be described later with reference to FIG. 7.
  As the control device 7 described above, for example, a general-purpose computer system including a CPU, a memory, an external storage device such as an HDD, an input device, and an output device can be used. In this computer system, each function of the control device 7 is realized by the CPU executing a program for the control device 7 loaded into the memory. The program for the control device 7 can be stored in a computer-readable recording medium such as a hard disk, flexible disk, CD-ROM, MO, or DVD-ROM, or distributed via a network.
  Next, the processing of the control device 7 of this embodiment will be described.
  First, the video camera 2 photographs a user who is viewing content. This camera video is input to the camera video analysis unit 71 of the control device 7 in units of frames via the camera input interface 3.
  The camera video analysis unit 71 detects a user video (here, the face and upper body) for each user from the input camera video frame. Then, the camera video analysis unit 71 outputs coordinate information representing the image areas of the user's face and upper body, together with the camera video frame, to the TV content determination unit 72. To detect a user's face and upper body, a technique such as that of Reference 1 below can be used, for example: feature amounts are extracted according to the face orientation, similarities are calculated using the feature amounts, and the user's video is recognized and detected based on the calculated similarities.
[Reference 1]: JP 2009-157766 A
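  As one concrete illustration of this detection step, the following sketch uses an OpenCV Haar cascade as a stand-in for the method of Reference 1 (the patent does not prescribe this library); the upper-body box derived from the face box is an assumption for illustration.

```python
# Minimal sketch of the camera video analysis step, assuming OpenCV's
# bundled Haar cascade in place of Reference 1's detection method.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_user_regions(frame):
    """Return face boxes and rough upper-body boxes, each as (x, y, w, h)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    regions = []
    for (x, y, w, h) in faces:
        # Approximate the upper body by widening the face box and
        # extending it downward (an assumed heuristic, not the patent's).
        bx = max(0, x - w // 2)
        bw = min(frame.shape[1] - bx, 2 * w)
        bh = min(frame.shape[0] - y, 3 * h)
        regions.append({"face": (x, y, w, h), "upper_body": (bx, y, bw, bh)})
    return regions
```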
FIG. 5 is a flowchart illustrating the processing performed by the TV content determination unit 72 and the sign language determination unit 73. First, the content video output from the video content output device 8 is input to the TV content determination unit 72 in units of frames, and the coordinate information representing the image areas of the user's face and upper body, together with the camera video frame, is input from the camera video analysis unit 71, also in units of frames (S11).
  The TV content determination unit 72 determines whether or not the input content video frame is a television CM (S12). This can be determined using, for example, a technique such as that of Reference 2 below.
[Reference 2]: Take Komoe and Shinichi Sato, "Research on ultra-high-speed CM detection and its application to knowledge discovery", IEICE Technical Report, PRMU2011-53, June 2011, pp. 119-124
In the present embodiment, when the content video is in a CM, the sign language video is superimposed on the content video and displayed on the screen 1 regardless of whether or not the user is signing. The sign language video is a video of the user in which the signing motion can be visually recognized while the user is performing sign language; in the present embodiment, it is a video including the user's face and upper body. When the content video is not in a CM, the sign language video is superimposed on the content video and displayed on the screen 1 only while the user is signing.
  In the present embodiment, when a transition is made from a state in which no signing is being performed to a state in which signing is being performed (that is, when signing starts), a marker is attached to the camera video frame for a predetermined time (t1 seconds) from the start of signing, so that effect processing (emphasis processing) can notify other users that signing has started.
  Specifically, when the input content video frame is a CM (S12: YES), the TV content determination unit 72 outputs the camera video frame input in S11 and the coordinate information representing the image areas of the user's face and upper body to the sign language determination unit 73. The sign language detection unit 731 of the sign language determination unit 73 determines whether or not the user is signing, using the input camera video frame (S13). Whether or not the user is signing can be determined using, for example, a technique such as that of Reference 3 below.
[Reference 3]: Hiroshi Yamada, Naoshi Matsuo, Nobutaka Shimada, and Yoshiaki Shirai, "Hand Region Detection and Shape Identification by Learning of Signs for Sign Language Recognition", Image Recognition and Understanding Symposium (MIRU2009), July 2009, pp. 635-642
When signing is not detected in the input camera video frame (S13: NO), the sign language non-detection time reference unit 732 of the sign language determination unit 73 stores the current time as n1 in a storage unit such as a memory (S14). Then, the camera video frame and the coordinate information representing the image areas of the user's face and upper body are output to the user video generation unit 75 (S15).
  When signing is detected in the input camera video frame (when there are multiple users, when at least one of them is signing) (S13: YES), the sign language non-detection time reference unit 732 compares the current time with the n1 stored immediately before in the storage unit and calculates the difference. When the calculated difference is within the pre-specified t1 seconds (S16: YES), it determines that it is within t1 seconds of the start of signing. In this case, to perform effect processing that notifies (calls the attention of) other users viewing the content video that signing has started, the sign language non-detection time reference unit 732 attaches an arbitrary marker to the camera video frame input in S11 and outputs it to the user video generation unit 75 together with the coordinate information representing the image areas of the user's face and upper body (S17).
  When the difference between the immediately previously stored n1 and the current time exceeds t1 seconds (S16: NO), the sign language non-detection time reference unit 732 determines that t1 seconds have elapsed since the start of signing and, without attaching a marker, outputs the camera video frame and the coordinate information representing the image areas of the user's face and upper body to the user video generation unit 75 (S15).
  On the other hand, when the input content video frame is not a CM (S12: NO), the TV content determination unit 72 outputs the camera video frame and the coordinate information representing the image areas of the user's face and upper body to the sign language determination unit 73. The sign language detection unit 731 of the sign language determination unit 73 determines whether or not the user is signing, using the input camera video frame (S18).
  When signing is not detected in the input camera video frame (S18: NO), the sign language non-detection time reference unit 732 of the sign language determination unit 73 stores the current time as n2 in a storage unit such as a memory (S22). Then, the camera video frame and the coordinate information representing the image area of the user's face are output to the facial expression change determination unit 74 (S23).
  When signing is detected in the input camera video frame (when there are multiple users, when at least one of them is signing) (S18: YES), the sign language non-detection time reference unit 732 compares the current time with the n2 stored immediately before in the storage unit. If the difference is within the pre-specified t1 seconds (S19: YES), it determines that it is within t1 seconds of the start of signing and, to make other users viewing the content video notice that signing has started, attaches an arbitrary marker to the camera video frame input in S11 and outputs it to the user video generation unit 75 together with the coordinate information representing the image areas of the user's face and upper body (S20).
  When the difference between the immediately previously stored n2 and the current time exceeds t1 seconds (S19: NO), the sign language non-detection time reference unit 732 determines that t1 seconds have elapsed since the start of signing and, without attaching a marker, outputs the camera video frame and the coordinate information representing the image areas of the user's face and upper body to the user video generation unit 75 (S21).
  Note that the processing in FIG. 5 is repeatedly performed for each frame of the input camera video and content video.
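  The timing logic of FIG. 5 can be summarized as below. This is a minimal sketch, assuming stub detectors in place of References 2 and 3 and an assumed value for t1; it illustrates the n1/n2 bookkeeping and the emphasis-marker window, not the patented implementation.

```python
import time

t1 = 3.0  # emphasis window after signing starts, in seconds (assumed value)

def detect_sign_language(camera_frame) -> bool:   # stand-in for Reference 3's method
    return False

def is_cm(content_frame) -> bool:                 # stand-in for Reference 2's method
    return False

last_no_sign_time = time.monotonic()              # n1 / n2 in the flowchart

def process_sign_frame(content_frame, camera_frame):
    """One pass of the FIG. 5 logic for a single frame pair."""
    global last_no_sign_time
    if not detect_sign_language(camera_frame):
        last_no_sign_time = time.monotonic()      # S14 / S22: remember the no-sign time
        # During a CM, the sign language video is shown even without signing.
        return {"show": is_cm(content_frame), "marker": False}
    # Signing detected: if the last no-sign frame was within t1 seconds,
    # signing has just started, so attach the emphasis marker (S16-S17 / S19-S20).
    just_started = (time.monotonic() - last_no_sign_time) <= t1
    return {"show": True, "marker": just_started}
```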
  FIG. 6 is a flowchart showing processing of the facial expression change determination unit 74.
  In the present embodiment, when the content video is not in a CM, the user is not signing, and the user's facial expression changes, the facial expression video is superimposed on the content video and displayed on the screen 1 for a predetermined time (t2 seconds). In the present embodiment, a marker for effect processing (emphasis processing) that notifies other users that an expression change has occurred is attached to the camera video frame.
  The facial expression change detection unit 741 of the facial expression change determination unit 74 receives the camera video frame and the coordinate information representing the image area of the user's face output by the sign language determination unit 73 in S23 of FIG. 5 (S31).
  The facial expression change detection unit 741 detects whether a change in the user's facial expression has occurred, using the input camera video frame (S32). For example, a technique such as that of Reference 4 below can be used to detect a change in the user's facial expression.
[Reference 4]: Hiroshi Ota, Hitoshi Saji, and Hiromasa Nakatani, "Recognition of facial expression changes by a facial component model based on facial muscles", IEICE Transactions D-II, Vol. J82-D-II, No. 7, pp. 1129-1139, July 1999
  When a change in facial expression is detected (S32: YES), the facial expression change detection unit 741 outputs the camera video frame and the coordinate information representing the image area of the user's face, together with a tag A indicating what kind of expression change occurred (for example, whether the face became a smile or a surprised face), to the facial expression change detection time reference unit 742. The facial expression change detection time reference unit 742 stores the current time as n3 in a storage unit such as a memory, stores the input camera video frame and the coordinate information representing the image area of the user's face in the storage unit, and outputs them to the icon determination unit 743 (S33).
  When there is a change in facial expression, the icon determination unit 743 determines whether the facial expression video displayed on the screen 1 should be the user's face image acquired from the camera video frame or an arbitrary icon representing the user's facial expression (S34). The icon determination unit 743 makes this determination based on setting information set by the user; the user specifies in advance (or while viewing the content video), using the remote controller 5 or the like, whether the camera video frame image or an icon is to be used.
  The facial expression video is a video in which the user's facial expression can be visually recognized. In this embodiment, when acquired from the camera video frame it is a video including the user's face, and when an icon is used, the icon is one from which the facial expression can be understood.
  When the user has set the facial expression video to the camera video frame image (S34: NO), the icon determination unit 743 attaches an arbitrary marker to the camera video frame input in S31, in order to make other users viewing the content video notice that the expression change has occurred, and outputs it to the user video generation unit 75 together with the coordinate information representing the image area of the user's face (S35).
  If the user has set the facial expression video to an icon (S34: YES), the icon determination unit 743 selects an icon corresponding to the facial expression represented by the tag A (S36) and outputs the information of the selected icon to the user video generation unit 75 (S37). The tag A is stored in a storage unit such as a memory in the icon determination unit 743, in association with n3.
  When there is no expression change in the input camera video frame (S32: NO), the facial expression change detection unit 741 outputs the camera video frame and the coordinate information representing the image area of the user's face to the facial expression change detection time reference unit 742. The facial expression change detection time reference unit 742 compares the current time with the n3 stored immediately before in the storage unit; if the difference is within the preset time (t2 seconds) (S38: YES), it determines that it is within t2 seconds of the occurrence of the expression change (within the display period of the facial expression video on the screen 1). Then, using the information stored in S33, the facial expression change detection time reference unit 742 determines whether or not the difference between the face position at time n3 and the current face position is within α pixels (S39). In this way, it is determined whether the user's face in the camera video frame in which the expression change occurred and the user's face in the current camera video frame belong to the same person.
  When the difference is within α pixels (S39: YES), the facial expression change detection time reference unit 742 determines that the person is the same as in the past frame and outputs the camera video frame and the coordinate information of the image area of the user's face to the icon determination unit 743. If the user has set the facial expression video to the camera video frame image (S40: NO), the icon determination unit 743 outputs the camera video frame and the coordinate information representing the image area of the user's face to the user video generation unit 75 (S41). If the user has set the facial expression video to an icon (S40: YES), the icon determination unit 743 selects the icon corresponding to the facial expression represented by the tag A stored in the storage unit in the preceding S36 (S42) and outputs the icon information to the user video generation unit 75 (S43).
  On the other hand, when the difference between the current time and n3 exceeds the pre-specified t2 seconds (S38: NO), the facial expression change detection time reference unit 742 determines that t2 seconds (the display period of the facial expression video on the screen 1) have elapsed since the expression change occurred, and performs no output to the icon determination unit 743 (S43). As a result, the facial expression video that had been displayed on the screen 1 disappears.
  When the difference exceeds α pixels (S39: NO), the facial expression change detection time reference unit 742 determines that the user is different from the person in the past frame and performs no output to the icon determination unit 743 (S43). As a result, the facial expression video that had been displayed on the screen 1 disappears.
  Note that the processing in FIG. 6 is repeatedly performed for each frame of the input camera video.
  In the embodiment shown in FIG. 6, the only camera video frame to which a marker is attached is the one corresponding to S35; however, when the value of t2 is small, a marker may also be attached to the camera video frame of S41 so that effect processing is performed. That is, effect processing may be performed for as long as the facial expression video is displayed on the screen 1.
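  The FIG. 6 logic, with its t2-second display window and α-pixel same-person check, can be sketched as below. The detector is a stub standing in for Reference 4's method, and the values of t2 and α are assumptions for illustration.

```python
import time
from typing import Optional

t2 = 5.0        # display window after an expression change, in seconds (assumed)
ALPHA_PX = 40   # max face displacement treated as the same person (assumed)

def detect_expression_change(camera_frame, face_box) -> Optional[str]:
    return None  # stand-in for Reference 4's method; returns a tag such as "smile"

state = {"n3": None, "tag": None, "pos": None}    # n3 and tag A of the flowchart

def process_expression_frame(camera_frame, face_box):
    """One pass of the FIG. 6 logic; face_box is (x, y, w, h)."""
    now = time.monotonic()
    tag = detect_expression_change(camera_frame, face_box)
    if tag is not None:                                   # S32: YES
        state.update(n3=now, tag=tag, pos=face_box[:2])   # S33: store n3 and tag A
        return {"show": True, "tag": tag, "marker": True}
    if state["n3"] is None or now - state["n3"] > t2:     # S38: NO, window expired
        return {"show": False}
    dx = abs(face_box[0] - state["pos"][0])               # S39: same-person check
    dy = abs(face_box[1] - state["pos"][1])
    if max(dx, dy) > ALPHA_PX:
        return {"show": False}                            # different person: hide
    return {"show": True, "tag": state["tag"], "marker": False}  # keep showing
```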
  FIG. 7 shows the processing of the user video generation unit 75 for each type of information input by the processing of FIGS. 5 and 6.
(a) When a camera video frame with a marker and coordinate information representing the image areas of the user's face and upper body are input to the user video generation unit 75 (S17 and S20 in FIG. 5)
First, the user video extraction unit 751 extracts the user's face and upper body from the camera video frame based on the coordinate information representing their image areas, and generates a sign language video (S51). Then, the user video effect processing unit 752 applies effect processing (emphasis processing) to the generated sign language video so that its display stands out (S52). Examples of effects include adding a conspicuous colored frame around the sign language video, blinking that frame, and enlarging the sign language video beyond its preset normal size. In addition, if the user has designated an effect in advance using the remote controller 5 or the like, that effect may be applied.
  Next, the user video size determination unit 753 adjusts the size of the sign language video to a size specified in advance by the user with the remote controller 5 or the like (S53). Next, the user video position coordinate determination unit 754 assigns position coordinates based on a position designated in advance by the user or on the embodiment described later, and outputs the sign language video and the position coordinates to the video synthesis unit 76 (S54).
(b) When a camera video frame and coordinate information representing the image areas of the user's face and upper body are input to the user video generation unit 75 (S15 and S21 in FIG. 5)
First, the user video extraction unit 751 extracts the user's face and upper body from the camera video frame based on the coordinate information representing their image areas, and generates a sign language video (S61). Then, if the user has designated in advance, with the remote controller 5 or the like, that an effect is to be applied to the sign language video, the user video effect processing unit 752 applies the designated effect (S62). If no effect is designated by the user, no effect processing is performed.
  Next, the user video size determination unit 753 adjusts the size of the sign language video to a size specified in advance by the user (S63). Next, the user video position coordinate determination unit 754 assigns position coordinates based on a position designated in advance by the user or on the embodiment described later, and outputs the sign language video and the position coordinates to the video synthesis unit 76 (S64).
(c) When a camera video frame with a marker and coordinate information representing the image area of the user's face are input to the user video generation unit 75 (S35 in FIG. 6)
First, the user video extraction unit 751 extracts the user's face from the camera video frame based on the coordinate information representing its image area, and generates a facial expression video (S71). Then, the user video effect processing unit 752 applies an effect that makes the generated facial expression video stand out (S72). The effects are the same as in S52 of (a).
  Next, the user video size determination unit 753 adjusts the size of the facial expression video to a size specified in advance by the user (S73). Next, the user video position coordinate determination unit 754 assigns position coordinates based on a position designated in advance by the user or on the embodiment described later, and outputs the facial expression video and the position coordinates to the video synthesis unit 76 (S74).
(d) When a camera video frame and coordinate information representing the image area of the user's face are input to the user video generation unit 75 (S41 in FIG. 6)
First, the user video extraction unit 751 extracts the user's face from the camera video frame based on the coordinate information representing its image area, and generates a facial expression video (S81). Then, if the user has designated in advance that an effect is to be applied to the facial expression video, the user video effect processing unit 752 applies the designated effect (S82). If no effect is designated by the user, no effect processing is performed.
  Next, the user video size determination unit 753 adjusts the size of the facial expression video to a size specified in advance by the user (S83). Next, the user video position coordinate determination unit 754 assigns position coordinates based on a position designated in advance by the user or on the embodiment described later, and outputs the facial expression video and the position coordinates to the video synthesis unit 76 (S84).
(e) When icon information is input to the user video generation unit 75 as the facial expression video (S37 and S43 in FIG. 6)
First, if the user has designated in advance that an effect is to be applied to the icon, the user video effect processing unit 752 applies the designated effect (S91). Next, the user video size determination unit 753 adjusts the size of the icon video to a size specified in advance by the user (S92). Next, the user video position coordinate determination unit 754 determines position coordinates based on a position designated in advance by the user or on the embodiment described later, and outputs the icon information and the position coordinates to the video synthesis unit 76 (S93).
  Note that the process of FIG. 7 is repeatedly performed according to input information.
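  The common extract-emphasize-resize pipeline of cases (a) to (e) can be sketched as follows, assuming OpenCV; the colored frame is one of the emphasis effects named above, and the output size stands in for the user-specified size.

```python
# Minimal sketch of the FIG. 7 generation steps, assuming OpenCV.
# frame is a BGR image; box is (x, y, w, h) from the camera video analysis unit 71.
import cv2

def generate_user_video(frame, box, emphasize=False, out_w=160, out_h=120):
    x, y, w, h = box
    clip = frame[y:y + h, x:x + w].copy()        # extract (S51/S61/S71/S81)
    if emphasize:
        # One example of emphasis processing: a conspicuous colored frame (S52/S72).
        cv2.rectangle(clip, (0, 0), (w - 1, h - 1), (0, 0, 255), thickness=8)
    return cv2.resize(clip, (out_w, out_h))      # size adjustment (S53/S63/S73/S83)
```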
  In the processes (a) to (e) described above, when the user video position coordinate determination unit 754 determines the position coordinates of the sign language video or facial expression video based on the position of character information included in the content video, the character information detection unit 755 detects the display position of the character information from the content video. For the detection of character information, a technique such as that of Reference 5 below can be used, for example. The character information is information such as subtitles and telops that renders audio such as dialogue and narration in the content video as text, excluding time displays and program logos.
[Reference 5]: Takao Kadoma, Eiji Sawamura, Toru Tsuki, and Katsuhiko Shirai, "Development of Automatic Formatting Technology in a Practical System for Off-line Caption Production", 2003 Winter Conference of the Institute of Image Information and Television Engineers
  An embodiment for determining the position coordinates of the sign language video and the facial expression video will now be described.
  Considering ease of viewing for users with hearing impairments, the sign language video and facial expression video should avoid, as much as possible, overlapping character information superimposed on the content video, such as subtitles and telops, and should be placed as close to the character information as possible so that eye movement is small. Here, the sign language video is displayed at the corner of the screen (among the four corners) closest to the character information, and the facial expression video is displayed at the end of the character information. Since sign language videos often have a large image size so that the content of the signing can be seen, the four corners are used for the sign language video in consideration of cases where it does not fit at the end of the character information.
  When character information is detected at multiple locations on the screen, the sign language video is displayed at the corner near the character information detected lowest on the screen, and the facial expression video is displayed at the end of the character information detected lowest on the screen.
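  The corner-selection rule can be sketched as below, assuming top-left-origin pixel coordinates and boxes given as (x, y, w, h); the distance measure (centre-to-centre) is an assumption, as the patent only states "the corner closest to the character information".

```python
# Minimal sketch: choose the screen corner nearest the caption box
# for placing the sign language video.
def nearest_corner(screen_w, screen_h, caption_box, video_w, video_h):
    cx = caption_box[0] + caption_box[2] / 2     # caption centre
    cy = caption_box[1] + caption_box[3] / 2
    corners = [
        (0, 0),                                  # top left
        (screen_w - video_w, 0),                 # top right
        (0, screen_h - video_h),                 # bottom left
        (screen_w - video_w, screen_h - video_h) # bottom right
    ]
    def dist_sq(pos):
        # Distance from the video centre at this corner to the caption centre.
        px = pos[0] + video_w / 2
        py = pos[1] + video_h / 2
        return (px - cx) ** 2 + (py - cy) ** 2
    return min(corners, key=dist_sq)
```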
  FIGS. 8 and 9 show image diagrams of example screens in which the position coordinates of the sign language video and the facial expression video are determined according to this embodiment.
  The screen 81 in FIG. 8 is an example of a screen that displays sign language videos when character information is detected at the center of the screen; the sign language videos 811 and 812 of users viewing the content video are displayed in the lower left and lower right corners, closest to the character information 810. The screen 82 in FIG. 8 is an example of a screen that displays facial expression videos when character information is detected at the center of the screen; the facial expression videos 821 and 822 of users viewing the content video are displayed at the ends of the character information 820.
  The screen 83 in FIG. 8 is an example of a screen that displays sign language videos when character information is detected at the bottom of the screen; the sign language videos 831 and 832 are displayed at the left and right corners closest to the character information 830. The screen 84 in FIG. 8 is an example of a screen that displays facial expression videos when character information is detected at the bottom of the screen; the facial expression videos 841 and 842 are displayed at the ends of the character information 840.
  The screens 91 and 92 in FIG. 9 are examples of screens that display a sign language video and a facial expression video, respectively, when character information is detected at the top of the screen. The screen 93 in FIG. 9 is an example of a screen in which multiple pieces of character information are detected and a sign language video is displayed at the corner close to the lowest character information. The screen 94 in FIG. 9 is an example of a screen that displays a facial expression video at the end of the lowest character information when multiple pieces of character information are detected.
  Thus, the display positions of the sign language video and the facial expression video on the screen are determined so as not to overlap the character information. Apart from the above embodiment, the sign language video and facial expression video may also be displayed at a position arbitrarily chosen by the user.
  The video synthesis unit 76 then generates a composite video by superimposing the sign language video or facial expression video output from the user video generation unit 75, at the specified position coordinates, on the content video output from the video content output device 8, and sends it to the screen 1 via the screen output interface 4. Thereby, the screen 1 displays a composite video in which the sign language video or facial expression video of a user viewing the content video is superimposed.
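  The composition step itself is a simple paste at the determined coordinates. A minimal sketch, assuming frames are NumPy arrays and that the overlay is opaque (the patent describes superimposition, not blending):

```python
import numpy as np

def compose(content_frame: np.ndarray, user_video: np.ndarray, pos) -> np.ndarray:
    """Superimpose the user video at position coordinates (x, y) from unit 754."""
    x, y = pos
    h, w = user_video.shape[:2]
    out = content_frame.copy()
    out[y:y + h, x:x + w] = user_video   # paste the sign language / expression video
    return out
```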
  In the present embodiment described above, the user's sign language video is superimposed on the content video only while the hearing-impaired user is signing and while the content video being viewed has switched to a television commercial. This makes it possible to communicate through the sign language video superimposed on the content video while preserving the visibility of the content video, so that communication becomes smoother when a user with a hearing impairment views content video.
  Specifically, by detecting the user's signing motion, the user's sign language video is superimposed on the content video and displayed while signing is detected. In addition, the period during which the content video has switched to a television commercial is regarded as a timing at which conversation among users is likely to occur, and the sign language video is superimposed on the content video during that period as well, thereby realizing and promoting smooth communication.
  Further, in this embodiment, when a change in the user's facial expression is detected, a facial expression video of the user's face is superimposed on the content video. A change in facial expression that would otherwise go unnoticed, because the users face the screen rather than each other, can thus be conveyed to the other party, making communication smoother.
  In this embodiment, when the display of a sign language video or facial expression video starts, an effect that makes the video stand out is applied. Even if a user is concentrating on the content video, it is therefore easy to notice that the display of a sign language video or facial expression video has started, that is, that the other party has begun signing or that the other party's facial expression has changed. Accordingly, the user can make effective use of the sign language video and the facial expression video, realizing smoother communication.
  In the present embodiment, the display position of the sign language video or facial expression video on the screen is determined so as not to overlap character information such as telops and subtitles. This makes it possible to communicate smoothly through the videos while preserving the readability of the character information included in the content video.
  The present invention is not limited to the above embodiment, and many modifications are possible within the scope of its gist.
1: Screen, 2: Video camera, 3: Camera input interface, 4: Screen output interface, 5: Remote controller, 6: User input interface, 7: Control device, 8: Video content output device, 71: Camera video analysis unit, 72: TV content determination unit, 73: Sign language determination unit, 74: Facial expression change determination unit, 75: User video generation unit, 76: Video synthesis unit

Claims (9)

  1. A viewer video display control device for controlling display of a viewer's video, comprising:
    a video analysis unit that analyzes camera video input from a camera shooting a viewer viewing content video and detects the viewer's video;
    a TV content determination unit that determines whether the content video is in a CM;
    a sign language detection unit that detects, using the camera video, whether the viewer is signing; and
    a video synthesis unit that, when the TV content determination unit determines that a CM is not in progress, superimposes a sign language video using the viewer's video onto the content video, starting at the timing when the sign language detection unit detects the signing and only while the viewer is signing,
    wherein the sign language video is a video in which the viewer's signing motion can be visually recognized while the viewer is performing sign language.
  2. The viewer video display control device according to claim 1,
    wherein, when the TV content determination unit determines that a CM is in progress, the video synthesis unit superimposes the sign language video using the viewer's video onto the content video.
  3. The viewer video display control device according to claim 1,
    further comprising a facial expression change detection unit that detects a change in the viewer's facial expression using the camera video,
    wherein, when the TV content determination unit determines that a CM is not in progress and the viewer is not signing, the video synthesis unit superimposes a facial expression video using the viewer's video onto the content video at the timing when the facial expression change detection unit detects the expression change.
  4. The viewer video display control device according to claim 1, further comprising:
    a facial expression change detection unit that detects a change in the viewer's facial expression using the camera video; and
    an emphasis unit that performs emphasis processing on the sign language video using the viewer's video, or on the facial expression video using the viewer's video, for a predetermined time after detection by the sign language detection unit or the facial expression change detection unit,
    wherein the video synthesis unit superimposes the sign language video or facial expression video on which the emphasis processing has been performed onto the content video.
  5. A viewer video display control method for controlling display of a viewer's video, performed by a computer, comprising:
    a video analysis step of analyzing camera video input from a camera shooting a viewer viewing content video and detecting the viewer's video;
    a TV content determination step of determining whether the content video is in a CM; and
    a video synthesis step of, when the TV content determination step determines that a CM is not in progress, superimposing a sign language video using the viewer's video onto the content video, starting at the timing when the viewer starts signing and only while the viewer is signing,
    wherein the sign language video is a video in which the viewer's signing motion can be visually recognized while the viewer is performing sign language.
  6. The viewer video display control method according to claim 5,
    wherein, when the TV content determination step determines that a CM is in progress, the video synthesis step superimposes the sign language video using the viewer's video onto the content video.
  7. The viewer video display control method according to claim 5,
    wherein, when the TV content determination step determines that a CM is not in progress and the viewer is not signing, the video synthesis step superimposes a facial expression video using the viewer's video onto the content video at the timing when the viewer's facial expression changes.
  8. The viewer video display control method according to claim 5,
    further comprising an emphasis step of performing emphasis processing that emphasizes the sign language video using the detected viewer's video for a predetermined time from the start of signing, or emphasizes the facial expression video using the detected viewer's video for a predetermined time from the expression change,
    wherein the video synthesis step superimposes the sign language video or facial expression video on which the processing of the emphasis step has been performed onto the content video.
  9. A viewer video display control program for causing a computer to function as each unit included in the viewer video display control device according to any one of claims 1 to 4.
JP2012277959A 2012-12-20 2012-12-20 Viewer video display control device, viewer video display control method, and viewer video display control program Active JP6030945B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012277959A JP6030945B2 (en) 2012-12-20 2012-12-20 Viewer video display control device, viewer video display control method, and viewer video display control program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2012277959A JP6030945B2 (en) 2012-12-20 2012-12-20 Viewer video display control device, viewer video display control method, and viewer video display control program

Publications (2)

Publication Number Publication Date
JP2014123818A JP2014123818A (en) 2014-07-03
JP6030945B2 true JP6030945B2 (en) 2016-11-24

Family

ID=51403992

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2012277959A Active JP6030945B2 (en) 2012-12-20 2012-12-20 Viewer video display control device, viewer video display control method, and viewer video display control program

Country Status (1)

Country Link
JP (1) JP6030945B2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5894055B2 (en) * 2012-10-18 2016-03-23 日本電信電話株式会社 VIDEO DATA CONTROL DEVICE, VIDEO DATA CONTROL METHOD, AND VIDEO DATA CONTROL PROGRAM

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI276357B (en) * 2002-09-17 2007-03-11 Ginganet Corp Image input apparatus for sign language talk, image input/output apparatus for sign language talk, and system for sign language translation
JP2004112511A (en) * 2002-09-19 2004-04-08 Fuji Xerox Co Ltd Display controller and method therefor
JP2005109669A (en) * 2003-09-29 2005-04-21 Casio Comput Co Ltd Display system, terminal, and terminal containing bag
JP4845581B2 (en) * 2006-05-01 2011-12-28 三菱電機株式会社 Television broadcast receiver with image and audio communication function
JP2010014487A (en) * 2008-07-02 2010-01-21 Sanyo Electric Co Ltd Navigation device
JP2010026021A (en) * 2008-07-16 2010-02-04 Sony Corp Display device and display method
JP2010239499A (en) * 2009-03-31 2010-10-21 Brother Ind Ltd Communication terminal unit, communication control unit, method of controlling communication of communication terminal unit, and communication control program
JP5346797B2 (en) * 2009-12-25 2013-11-20 株式会社アステム Sign language video synthesizing device, sign language video synthesizing method, sign language display position setting device, sign language display position setting method, and program
JP2012085009A (en) * 2010-10-07 2012-04-26 Sony Corp Information processor and information processing method
JP5894055B2 (en) * 2012-10-18 2016-03-23 日本電信電話株式会社 VIDEO DATA CONTROL DEVICE, VIDEO DATA CONTROL METHOD, AND VIDEO DATA CONTROL PROGRAM

Also Published As

Publication number Publication date
JP2014123818A (en) 2014-07-03

Similar Documents

Publication Publication Date Title
US10482849B2 (en) Apparatus and method for compositing image in a portable terminal
WO2017157272A1 (en) Information processing method and terminal
US10334162B2 (en) Video processing apparatus for generating panoramic video and method thereof
US10565763B2 (en) Method and camera device for processing image
US9558591B2 (en) Method of providing augmented reality and terminal supporting the same
US9124766B2 (en) Video conference apparatus, method, and storage medium
KR100775176B1 (en) Thumbnail recording method for providing information of video data and terminal using the same
US8890923B2 (en) Generating and rendering synthesized views with multiple video streams in telepresence video conference sessions
EP3125524A1 (en) Mobile terminal and method for controlling the same
CN106210855B (en) Object display method and device
US6961446B2 (en) Method and device for media editing
US8436941B2 (en) Information presenting device and information presenting method
JP3773670B2 (en) Information presenting method, information presenting apparatus, and recording medium
US7970257B2 (en) Image display method and electronic apparatus implementing the image display method
KR101464572B1 (en) A method of adapting video images to small screen sizes
JP6165846B2 (en) Selective enhancement of parts of the display based on eye tracking
TWI253860B (en) Method for generating a slide show of an image
EP3298509B1 (en) Prioritized display of visual content in computer presentations
US8416332B2 (en) Information processing apparatus, information processing method, and program
KR101009881B1 (en) Apparatus and method for zoom display of target area from reproducing image
US8384816B2 (en) Electronic apparatus, display control method, and program
US8421819B2 (en) Pillarboxing correction
US7808555B2 (en) Image display method and image display apparatus with zoom-in to face area of still image
JP4909854B2 (en) Electronic device and display processing method
JP2004072132A5 (en)

Legal Events

Date        Code  Title (Description)
2015-02-27  A621  Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
2015-12-16  A977  Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
2016-02-02  A131  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
2016-03-31  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
2016-08-23  A131  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
2016-09-26  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
            TRDD  Decision of grant or rejection written
2016-10-18  A01   Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
2016-10-21  A61   First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
            R150  Certificate of patent or registration of utility model (JAPANESE INTERMEDIATE CODE: R150; Ref document number: 6030945; Country of ref document: JP)