US11758259B2 - Electronic apparatus and controlling method thereof - Google Patents

Electronic apparatus and controlling method thereof

Info

Publication number
US11758259B2
Authority
US
United States
Prior art keywords
image
display area
area
captured image
electronic apparatus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/196,170
Other languages
English (en)
Other versions
US20220070360A1 (en)
Inventor
Dongho Lee
Jongho Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Assigned to SAMSUNG ELECTRONICS CO., LTD. (Assignors: LEE, DONGHO; KIM, JONGHO)
Publication of US20220070360A1 publication Critical patent/US20220070360A1/en
Application granted granted Critical
Publication of US11758259B2 publication Critical patent/US11758259B2/en

Classifications

    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/635: Control of cameras or camera modules by using electronic viewfinders; region indicators; field of view indicators
    • H04N 23/951: Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • G06T 7/20: Image analysis; analysis of motion
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N 23/80: Camera processing pipelines; components thereof
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the disclosure relates to an electronic apparatus and a controlling method thereof.
  • the disclosure relates to an electronic apparatus for displaying an object included in a captured image and a controlling method thereof.
  • a user who mimics an exercise pose (e.g., yoga, stretching, etc.) from content provided by a television may not be able to see their own pose.
  • if the user is captured through a camera, the user may compare the correct pose in the content with the captured posture.
  • the content may be edited by an expert so that the switching of the screen is smooth, but such editing cannot be performed when the user is captured through the camera; thus, there may be a problem that the switching of the screen is not smooth.
  • the field of view of the camera may be fixed, so the user has to operate the camera manually, which is inconvenient.
  • when simply tracking a motion of the user, the motion cannot be predicted in advance; thus, if the user abruptly changes a pose, there may be a problem in that some areas may not be displayed.
  • Embodiments of the disclosure address at least the above-mentioned problems and/or disadvantages and provide at least the advantages described below.
  • Embodiments of the disclosure provide an electronic apparatus to identify a display area of an image based on display areas of each of a plurality of captured images and a controlling method thereof.
  • the electronic apparatus includes: a camera and a processor configured to control the electronic apparatus to: track an object area including a user object from a captured image obtained through the camera and identify a display area from the captured image based on the tracked object area, and the processor is further configured to: identify a display area of a first captured image based on the object area identified from the first captured image, identify the display area of a second captured image based on an object area identified from the second captured image, and identify a display area of a third captured image based on a display area of the first captured image and a display area of the second captured image.
  • a controlling method of an electronic apparatus includes: tracking an object area including a user object from a captured image and identifying a display area from the captured image based on the tracked object area, identifying a display area of a first captured image based on the object area identified from the first captured image, identifying the display area of a second captured image based on an object area identified from the second captured image; and identifying a display area of a third captured image based on a display area of the first captured image and a display area of the second captured image.
  • FIG. 1 is a diagram illustrating an example electronic apparatus capturing a user according to various embodiments
  • FIG. 2 is a block diagram illustrating an example electronic apparatus according to various embodiments
  • FIG. 3 is a block diagram illustrating an example configuration of the electronic apparatus of FIG. 2 according to various embodiments
  • FIG. 4 is a flowchart illustrating an example operation of tracking a user in a captured image capturing a user according to various embodiments
  • FIG. 5 is a diagram illustrating an example operation of displaying an image capturing a user taking a first pose according to various embodiments
  • FIG. 6 is a diagram illustrating an example operation of displaying an image capturing a user taking a second pose according to various embodiments
  • FIG. 7 is a flowchart illustrating an example operation of identifying a size of a display area by comparing images of a user taking a first pose and a second pose according to various embodiments;
  • FIG. 8 is a diagram illustrating an example operation of identifying a display area according to various embodiments.
  • FIG. 9 is a diagram illustrating an example operation of identifying a display area according to various embodiments.
  • FIG. 10 is a diagram illustrating an example operation of displaying a third captured image based on an identified display area according to various embodiments
  • FIG. 11 is a flowchart illustrating an example operation of identifying a display area based on a threshold time according to various embodiments
  • FIG. 12 is a diagram illustrating an example of changing of a display area over time by specifying the operation of FIG. 11 according to various embodiments;
  • FIG. 13 is a flowchart illustrating an example operation of identifying a display area based on a threshold number according to various embodiments
  • FIG. 14 is a diagram illustrating an example of changing of a display area over time by specifying the operation of FIG. 13 according to various embodiments;
  • FIG. 15 is a flowchart illustrating an example operation of identifying a display area based on a content received from an external server according to various embodiments
  • FIG. 16 is a flowchart illustrating an example operation of considering ratio information to identify a size of a display area according to various embodiments
  • FIG. 17 is a diagram illustrating an example change in size between an image of a user taking a first pose and an image of the user taking a second pose according to various embodiments;
  • FIG. 18 is a diagram illustrating example ratio information between images of FIG. 17 according to various embodiments.
  • FIG. 19 is a diagram illustrating an example operation of displaying an image by applying ratio information according to various embodiments.
  • FIG. 20 is a flowchart illustrating an example method of controlling an electronic apparatus according to various embodiments.
  • terms such as “first” and “second” may identify corresponding components, regardless of order and/or importance, and are used to distinguish one component from another without limiting the components.
  • a term such as “module,” “unit,” and “part,” is used to refer to an element that performs at least one function or operation and that may be implemented as hardware or software, or a combination of hardware and software. Except when each of a plurality of “modules,” “units,” “parts,” and the like must be realized in an individual hardware, the components may be integrated in at least one module or chip and be realized in at least one processor (not shown).
  • a “user” may refer to a person using an electronic apparatus or an apparatus using an electronic apparatus (e.g., artificial intelligence electronic apparatus).
  • FIG. 1 is a diagram illustrating an example electronic apparatus capturing a user according to various embodiments.
  • an electronic apparatus 100 may include a camera 110 .
  • the electronic apparatus 100 may capture a user 1000 located in front of the electronic apparatus 100 through the camera 110 .
  • the electronic apparatus 100 may additionally include a display 140 and may display the captured image on the display 140 .
  • the captured image may include a user object (image type) corresponding to the user 1000 .
  • the electronic apparatus 100 may not display all areas of the captured image on the display 140 , since the resolution information of the camera 110 and the resolution information of the display 140 may be different. Therefore, the electronic apparatus 100 may crop a partial area of the captured image obtained.
  • the electronic apparatus 100 may display only a cropped partial area of all areas of the captured image on the display 140 .
  • the cropped partial area may be a display area.
  • if the user 1000 changes or moves a pose, a user object included in the captured image may also be changed. Accordingly, the electronic apparatus 100 may change a display area of the captured image displayed on the display 140 based on the size of the changed user object. However, it may be difficult to change the display area in real time with respect to the image captured in real time. Thus, in some situations, as illustrated by way of example in FIG. 6 , there may be a problem in that some areas of the user object may not be displayed on the display 140 .
  • FIG. 2 is a block diagram illustrating an example electronic apparatus according to various embodiments.
  • the electronic apparatus 100 may include a camera 110 and a processor (e.g., including processing circuitry) 120 .
  • the electronic apparatus 100 may include at least one of, for example, and without limitation, a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a medical device, a camera, a wearable device, or the like.
  • a wearable device may include at least one of an accessory type (e.g., a watch, a ring, a bracelet, an ankle bracelet, a necklace, a pair of glasses, a contact lens or a head-mounted-device (HMD)); a fabric or a garment-embedded type (e.g.: electronic cloth); skin-attached type (e.g., a skin pad or a tattoo); a bio-implantable circuit, or the like.
  • the electronic apparatus 100 may include at least one of, for example, and without limitation, a television, a digital video disk (DVD) player, an audio system, a refrigerator, an air conditioner, a cleaner, an oven, a microwave, a washing machine, an air purifier, a set top box, a home automation control panel, a security control panel, a media box (e.g., SAMSUNG HOMESYNC™, APPLE TV™, or GOOGLE TV™), a game console (e.g., XBOX™, PLAYSTATION™), an electronic dictionary, an electronic key, a camcorder, an electronic frame, or the like.
  • the electronic apparatus 100 may include various devices including a display.
  • the electronic apparatus 100 may include, for example, and without limitation, an electronic board, TV, desktop PC, notebook PC, smartphone, tablet PC, server, or the like.
  • the above example is merely an example to describe an electronic apparatus and the various embodiments are not necessarily limited thereto.
  • the camera 110 is configured to generate a captured image by capturing a subject.
  • the captured image may include both a moving image and a still image.
  • the camera 110 may obtain an image of at least one external device and may be implemented as a camera, a lens, an infrared sensor, or the like.
  • the camera 110 may include a lens and an image sensor.
  • the type of lens may be a general purpose lens, a wide angle lens, a zoom lens, or the like, and may be determined according to the type, characteristics, usage environment, or the like, of the electronic apparatus 100 .
  • the image sensor may be implemented, for example, as a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD) sensor.
  • the camera 110 may output the incident light as an image signal.
  • the camera 110 may include a lens, a pixel, and an analog-to-digital (AD) converter.
  • the lens may collect the light of the subject to form an optical image in a captured area, and the pixel may output the light input through the lens as an analog image signal.
  • the AD converter may convert an analog image signal into a digital image signal and output the converted signal.
  • the camera 110 may be arranged to capture a front direction of the electronic apparatus 100 , and may capture a user present on the front of the electronic apparatus 100 to generate a captured image.
  • the electronic apparatus 100 may include a plurality of cameras, and may combine images received through the plurality of cameras to identify a user's head posture. By using a plurality of cameras rather than one camera, three-dimensional movement may be analyzed more precisely, which is effective for identifying the user's head posture.
  • the processor 120 may include various processing circuitry and control the overall operation of the electronic apparatus 100 .
  • the processor 120 may function to control overall operations of the electronic apparatus.
  • this may also refer to the processor controlling the electronic apparatus 100 to perform the described function.
  • the processor 120 may be implemented with, for example, and without limitation, a digital signal processor (DSP) for processing of a digital signal, a microprocessor, a time controller (TCON), or the like.
  • the processor 120 may include, for example, and without limitation, one or more among a central processor (CPU), a micro controller unit (MCU), a microprocessor (MPU), a controller, an application processor (AP), a communication processor (CP), an advanced reduced instruction set computing (RISC) machine (ARM) processor, a dedicated processor, or may be defined as a corresponding term.
  • the processor 120 may be implemented as a system on chip (SoC) or large scale integration (LSI) in which a processing algorithm is embedded, as an application specific integrated circuit (ASIC), or as a field programmable gate array (FPGA).
  • the processor 120 may perform various functions by executing computer executable instructions stored in the memory.
  • the processor 120 may control the electronic apparatus 100 to track the object area including the user object in the captured image obtained through the camera 110 and identify the display area in the captured image based on the tracked object area. The processor 120 may identify the display area of the first captured image based on the object area identified in the first captured image, identify the display area of the second captured image based on the object area identified in the second captured image, and identify the display area of the third captured image based on the display area of the first captured image and the display area of the second captured image.
  • the processor 120 may receive a captured image capturing the front of the electronic apparatus 100 through the camera 110 .
  • the processor 120 may identify whether a user object corresponding to the user 1000 is included in the captured image received. If a user object is included in the captured image received, the processor 120 may identify an object area that includes a user object in the captured image received.
  • the object area may refer to an area in which a user object is displayed among all areas of the captured image.
  • the processor 120 may identify the object area based on the location information of the user object in the captured image, and may track the changed object area in real time. If the user 1000 changes or moves the pose, the object area may also be changed. The processor 120 may track the changed object area in real time.
  • the processor 120 may identify the display area based on the identified object area.
  • the display area may refer to an area to be displayed on a display among the entire area of the captured image.
  • the display may be a display 140 of the electronic apparatus 100 and may be a display of an external device according to an implementation example.
  • the processor 120 may obtain an image to be displayed by resizing or cropping the captured image. An image obtaining operation according to an embodiment will be described based on an image obtained through a cropping operation.
  • the processor 120 may remove some areas of the captured image to display the remaining areas or specify a partial area of the captured image to display only some areas.
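As an illustration of this cropping operation, here is a minimal sketch (not code from the patent; the function name and the (x, y, w, h) area layout are assumptions):

```python
import numpy as np

def crop_display_area(frame: np.ndarray, area: tuple) -> np.ndarray:
    """Keep only the display area (x, y, w, h) of a captured frame.

    `frame` is an H x W x 3 pixel array; everything outside the area is
    discarded, which corresponds to removing some areas and displaying
    only the remaining (display) area.
    """
    x, y, w, h = area
    return frame[y:y + h, x:x + w]

# Example: cut a 1920x1080 display area out of a 4K (3840x2160) frame.
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
display = crop_display_area(frame, (420, 0, 1920, 1080))  # shape (1080, 1920, 3)
```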
  • the processor 120 may identify the user object in the first captured image and identify the object area based on the identified user object.
  • the processor 120 may identify a display area corresponding to the first captured image based on the identified object area.
  • the processor 120 may also identify the user object in the second captured image and identify the object area based on the identified user object.
  • the processor 120 may identify a display area corresponding to the second captured image based on the identified object area.
  • the first and second captured images may include images received in successive frames.
  • the second captured image may be received successively after the first captured image is received.
  • the first captured image may refer to an image capturing the user 1000 taking a first pose
  • the second captured image may refer to an image capturing the user 1000 taking a second pose after taking the first pose.
  • an object area of the first captured image may be described as a first object area
  • an object area of the second captured image may be described as a second object area
  • the display area of the first captured image may be described as a first display area
  • a display area of the second captured image may be described as a second display area.
  • the processor 120 may identify the display area of the third captured image based on the display area of the first captured image (the first display area) and the display area (second display area) of the second captured image.
  • the third captured image may refer to an image received after the first captured image and the second captured image are received.
  • the processor 120 may determine which display area's size to use as the basis for the display area of the third captured image.
  • the processor 120 may also identify the display area of the third captured image itself.
  • the display area of the third captured image may be preset based on the display areas obtained from the first and second captured images. When the display area is changed in real time, the screen switching may not be smooth, which may inconvenience the user.
  • the processor 120 may determine the display area of the current captured image based on the display area of a past captured image. This does not mean that a display area is never newly identified for the current captured image; the processor 120 may display the image using the display area determined from the past captured image, and then newly identify a display area from the captured image received.
  • the display area may be divided into an expected display area and a set display area.
  • the expected display area may refer to a display area identified based on a user object included in the captured image received.
  • the set display area may refer to an area to be displayed on the display 140 . Therefore, the expected display area and the set display area may be different even in the image of the same point in time (or same time or same time point).
  • the expected display area and the set display area may be the same.
  • the image displayed on the display 140 may include all of the user objects. It is assumed that in the first captured image, the expected display area is a set display area.
  • an expected display area 623 and a set display area 523 may be different.
  • the expected display area 623 may be determined based on the current object area 622 , but the set display area 523 may be obtained based on a captured image 520 received in the past.
  • these may be different, as described above, because the existing display area may need to be maintained to keep the screen switching smooth. In addition, changing the display area in real time takes processing time and may therefore cause a delay in the screen.
  • the processor 120 may identify the setting display area of the third captured image based on the expected display area of the first captured image and the expected display area of the second captured image.
  • to change the set display area, an expected display area larger than the currently set display area needs to be identified.
  • if the processor 120 always adopted the larger expected display area, the set display area would only grow and never shrink while a plurality of captured images are continuously received.
  • after a threshold time has elapsed, the processor 120 may switch the set display area to the expected display area identified in the currently received captured image, rather than keeping the existing set display area. For example, referring to FIG. 12 , although the expected display area becomes smaller earlier, the set display area may be reduced only at the captured image obtained at 11 seconds.
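A minimal sketch of this threshold-time behavior (illustrative only: the class, the threshold value, and the grow-immediately/shrink-after-delay policy are assumptions layered on the description above):

```python
import time

THRESHOLD_SEC = 3.0  # illustrative; the patent does not fix a value

def area_size(area):
    """Size of an area given as a (width, height) tuple."""
    w, h = area
    return w * h

class DisplayAreaController:
    """Grow the set display area immediately, but shrink it only after the
    expected area has stayed smaller for at least THRESHOLD_SEC."""

    def __init__(self, initial_area):
        self.set_area = initial_area   # area currently shown on the display
        self._smaller_since = None     # when a smaller expected area first appeared

    def update(self, expected_area, now=None):
        now = time.monotonic() if now is None else now
        if area_size(expected_area) > area_size(self.set_area):
            self.set_area = expected_area      # enlarge so no body part is cut off
            self._smaller_since = None
        elif area_size(expected_area) < area_size(self.set_area):
            if self._smaller_since is None:
                self._smaller_since = now      # start the shrink timer
            elif now - self._smaller_since >= THRESHOLD_SEC:
                self.set_area = expected_area  # shrink only after the pose settles
                self._smaller_since = None
        else:
            self._smaller_since = None
        return self.set_area
```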
  • the display area may be divided into the expected display area and the set display area, but may be described as the display area for convenience.
  • the processor 120 may identify a (set) display area of the third captured image based on the (expected) display area of the second captured image.
  • the electronic apparatus 100 addresses a problem in which a portion of the body of the user 1000 is not displayed on the display 140 when the user 1000 abruptly changes the pose. If the display area of the second captured image is smaller than the display area of the first captured image, the processor 120 may not need to change the currently set display area. If the display area is maintained in a large state, the screen does not appear cut off from the user's point of view and all portions of the body of the user 1000 may be displayed. Accordingly, the processor 120 may change the existing set display area in a case where the size of the (expected) display area of the second captured image is greater than the size of the (expected) display area of the first captured image. The detailed description related thereto will be described later with reference to FIGS. 7 to 10 ; a brief sketch of the comparison follows.
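The rule can be stated compactly as a keep-the-larger comparison (a sketch under the assumption that areas are (width, height) tuples; the function names are illustrative):

```python
def size(area):
    """Size of an area given as a (width, height) tuple."""
    w, h = area
    return w * h

def display_area_for_third(first_area, second_area):
    """Display area for the third captured image: keep the first image's
    display area unless the second image's expected display area is
    larger, so an abruptly larger pose is never cut off."""
    return second_area if size(second_area) > size(first_area) else first_area
```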
  • the processor 120 may identify the object area based on the height of the user object in the captured image and may identify the (expected) display area based on the identified height of the object area.
  • the object area may have the horizontal information and the vertical information.
  • the most important element in identifying the display area may be the vertical information.
  • the vertical information may be height information.
  • the processor 120 may determine the height of the object area based on the height of the user object and determine the height of the display area based on the determined height of the object area.
  • the processor 120 may determine the width of the display area based on the horizontal information. It may be assumed that the user takes a pose of moving left and right rapidly. The processor 120 may identify a range of left and right movement of the user, and may identify a display area based on the identified range.
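One hedged way to realize this height-driven sizing (the margin factor, the bottom-anchored centering policy, and all names below are assumptions, not the patent's algorithm):

```python
MARGIN = 1.2  # illustrative headroom added around the tracked object area

def display_area_from_object(obj, display_w, display_h, frame_w, frame_h):
    """Derive a display area (x, y, w, h) from an object area (x, y, w, h).

    The object's height drives the display area's height, as described
    above; the width follows from the display's aspect ratio, which also
    leaves room for left-right movement.
    """
    ox, oy, ow, oh = obj
    h = min(frame_h, int(oh * MARGIN))
    w = min(frame_w, int(h * display_w / display_h))
    cx = ox + ow // 2                              # center on the user object
    x = max(0, min(frame_w - w, cx - w // 2))      # clamp to the frame
    y = max(0, min(frame_h - h, oy + oh - h))      # keep the lower edge near the feet
    return (x, y, w, h)
```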
  • the processor 120 may identify the (set) display area of the third captured image, and the third captured image may be an image captured after the plurality of second captured images.
  • the threshold number or more of second captured images may be captured consecutively or non-consecutively.
  • the second captured image may include a plurality of images.
  • the processor 120 may compare the (expected) display area of the first captured image with the (expected) display area of the plurality of second captured images to identify the (set) display area of the third captured image.
  • the processor 120 may take the plurality of second captured images into account to identify whether a predetermined event has occurred a threshold number of times or more. If the display area were identified based on every operation of the user 1000 , the amount of data throughput would increase and screen switching could slow down, generating a delay. Accordingly, the processor 120 may change the (set) display area only when a preset event occurs.
  • the predetermined event may refer, for example, to the size of the (expected) display area of the second captured image being larger than the size of the (expected) display area of the first captured image.
  • the preset event may be that the size of the (expected) display area of the second captured image is greater than the size of the (expected) display area of the first captured image, and that this state is observed continuously for greater than or equal to a threshold time.
  • the processor 120 may identify the display area of the third captured image when an (expected) display area larger than the size of the (expected) display area of the first captured image is identified in second captured images that are captured consecutively and whose number is greater than or equal to a threshold number.
  • alternatively, the preset event may be that the size of the (expected) display area of the second captured image is greater than the size of the (expected) display area of the first captured image and this is identified a threshold number of times or more.
  • in this case, the identifications counted toward the threshold number need not be consecutive. This will be described in greater detail below with reference to FIGS. 13 and 14 ; a sketch follows.
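A minimal counter for the non-consecutive variant (illustrative: the class, the threshold value, and the no-reset policy are assumptions):

```python
THRESHOLD_COUNT = 5  # illustrative; the patent leaves the number open

class LargerAreaCounter:
    """Signal a display-area change once a larger expected area has been
    observed THRESHOLD_COUNT times; observations need not be consecutive
    (a smaller intervening frame does not reset the count)."""

    def __init__(self):
        self.count = 0

    def observe(self, expected_size: int, set_size: int) -> bool:
        if expected_size > set_size:
            self.count += 1
        return self.count >= THRESHOLD_COUNT
```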
  • the electronic apparatus 100 may further include the display 140 , and the processor 120 may control the display 140 to display a screen in which the content image received from an external server is included in a first area and the identified (set) display area is included in a second area.
  • the content may be implemented in a form of being received in real time or being stored in the memory 150 of the electronic apparatus 100 in advance. An operation of displaying content additionally will be described in greater detail below with reference to FIGS. 5 , 6 , and 10 .
  • the processor 120 may track a guide area including a guide object in a content image, and may identify the (set) display area of the third captured image based on the size information of the tracked guide area, the size information of the object area of the first captured image, and the size information of the object area of the second captured image.
  • the guide object may refer to an object taking a specific pose in the content so that the user 1000 may mimic it.
  • the guide object may be an object capturing an actual person, or may be a virtual three-dimensional (3D) character, or the like.
  • the guide area may refer to an area where a guide object is located in the entire area of the content image.
  • the guide area may be a guide area 512 and a guide area 612 of FIG. 17 .
  • the processor 120 may obtain the first ratio information based on the size of the guide area identified in the first content image and the size of the guide area identified in the second content image, obtain second ratio information based on the size of the object area of the first captured image and the size of the object area of the second captured image, and identify the display area of the third captured image based on the first ratio information and the second ratio information.
  • the processor 120 may identify the display area of the third captured image based on the ratio information which is relatively larger between the first ratio information and the second ratio information.
  • ratio information may be obtained based on the display area, and will be described in greater detail below with reference to FIG. 15 ; a sketch of the computation follows.
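Put concretely (a sketch: the helper names and the (width, height) representation are assumptions; the max rule restates the "relatively larger ratio" above):

```python
def ratio(prev_area, curr_area):
    """Size ratio between two areas given as (width, height) tuples."""
    return (curr_area[0] * curr_area[1]) / (prev_area[0] * prev_area[1])

def scale_for_third_image(guide_prev, guide_curr, obj_prev, obj_curr):
    """Scale factor for the third image's display area, driven by the
    larger of the guide-area ratio (first ratio, from the content image)
    and the object-area ratio (second ratio, from the captured image)."""
    first_ratio = ratio(guide_prev, guide_curr)
    second_ratio = ratio(obj_prev, obj_curr)
    return max(first_ratio, second_ratio)
```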
  • the processor 120 may identify the (set) display area to be displayed through the display 140 in the captured image based on the resolution information of the display 140 and the tracked object area.
  • the resolution of the display 140 and the resolution of the captured image may be different. Accordingly, the processor 120 may not display the captured image as it is on the display 140 .
  • the processor 120 may crop the captured image according to the resolution information of the display 140 .
  • the processor 120 may identify the (set) display area to be displayed on the display 140 based on the resolution information of the display 140 and the size information of the object area, and may control the display 140 to display image information corresponding to the identified (set) display area.
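Since camera and display resolutions may differ, the identified display area would typically be cropped and then rescaled to the panel. A rough OpenCV sketch (the function name, default size, and interpolation choice are assumptions):

```python
import cv2  # OpenCV, assumed available

def render_to_display(frame, area, display_size=(1920, 1080)):
    """Crop the set display area (x, y, w, h) out of the captured frame
    and scale it to the display's resolution."""
    x, y, w, h = area
    crop = frame[y:y + h, x:x + w]
    return cv2.resize(crop, display_size, interpolation=cv2.INTER_LINEAR)
```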
  • the processor 120 may control the camera 110 based on the identified (set) display area. If the entire appearance of the user object is included in the captured image obtained by the camera 110 , the processor 120 may determine whether to zoom-in or zoom-out based on the identified (set) display area. The zoom-in or zoom-out operation may be determined based on the ratio information.
  • depending on this determination, the processor 120 may control the camera 110 to zoom in, or may control the camera 110 to zoom out.
  • the zoom-in and zoom-out operations are described here, but panning or tilting may be applied in the same manner.
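One hedged way such a zoom decision could be made (the thresholds and the occupancy heuristic are assumptions; the patent only states that the decision may be based on ratio information):

```python
ZOOM_IN_BELOW = 0.3   # illustrative occupancy thresholds
ZOOM_OUT_ABOVE = 0.8

def zoom_command(object_area_px: int, frame_area_px: int) -> str:
    """Pick a zoom direction from how much of the frame the tracked user
    object occupies; panning or tilting could be derived the same way
    from the object's horizontal/vertical offset."""
    occupancy = object_area_px / frame_area_px
    if occupancy < ZOOM_IN_BELOW:
        return "zoom_in"   # user too small in the frame
    if occupancy > ZOOM_OUT_ABOVE:
        return "zoom_out"  # user about to leave the frame
    return "hold"
```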
  • the electronic apparatus 100 may divide the object area and the display area based on the user object and analyze the past (expected) display area to determine a future (set) display area. Thus, since the electronic apparatus 100 does not need to process the (set) display area in real time, it is possible to provide a smooth screen switching service to the user, and a problem that a part of the body of the user is not displayed on the screen may be addressed.
  • since the electronic apparatus 100 may automatically control the camera 110 based on the changed (set) display area, the user does not need to adjust the angle and magnification of the camera manually.
  • FIG. 3 is a block diagram illustrating an example configuration of the electronic apparatus 100 of FIG. 2 according to various embodiments.
  • the electronic apparatus 100 may include the camera 110 , the processor (e.g., including processing circuitry) 120 , a communication interface (e.g., including communication circuitry) 130 , a display 140 , a memory 150 , a user interface (e.g., including interface circuitry) 160 , an input and output interface (e.g., including input/output circuitry) 170 , a microphone 180 , and a speaker 190 .
  • the communication interface 130 may include various communication circuitry and communicate with other external devices using various types of communication methods.
  • the communication interface 130 may include, for example, and without limitation, a Wi-Fi module, a Bluetooth module, an infrared communication module, a wireless communication module, or the like.
  • Each communication module may be implemented as at least one hardware chip.
  • the Wi-Fi module and the Bluetooth module perform communication using a Wi-Fi method and a Bluetooth method, respectively.
  • various connection information such as a service set identifier (SSID) and a session key may be transmitted and received first, and communication information may be transmitted after communication connection.
  • the infrared communication module may perform communication according to infrared data association (IrDA) technology, which transmits data wirelessly over a short distance using infrared rays whose wavelength lies between visible light and millimeter waves.
  • the wireless communication module may include at least one communication chip performing communication according to various communication standards such as Zigbee, 3rd generation (3G), 3rd generation partnership project (3GPP), long term evolution (LTE), LTE advanced (LTE-A), 4th generation (4G), 5th generation (5G), or the like, in addition to the communication methods described above.
  • the communication interface 130 may include at least one of a local area network (LAN) module, Ethernet module, or wired communication module performing communication using a pair cable, a coaxial cable, an optical cable, an ultra-wide band (UWB) module, or the like.
  • the communication interface 130 may use the same communication module (for example, Wi-Fi module) for communicating with an external device such as a remote controller and an external server.
  • the communication interface 130 may use a different communication module (for example, a Wi-Fi module) to communicate with an external server and an external device such as a remote controller.
  • for example, the communication interface 130 may use at least one of an Ethernet module or a Wi-Fi module to communicate with the external server, and may use a Bluetooth (BT) module to communicate with an external device such as a remote controller.
  • the display 140 includes a display panel to output an image.
  • the display 140 may be implemented as various types of panels such as, for example, and without limitation, a liquid crystal display (LCD) panel, organic light emitting diodes (OLED) display panel, a plasma display panel (PDP), and the like.
  • a driving circuit, which may be implemented as an a-Si thin film transistor (TFT), a low temperature poly silicon (LTPS) TFT, or an organic TFT (OTFT), and a backlight may also be included.
  • the display 140 may be implemented as at least one of a touch screen coupled with a touch sensor, a flexible display, a three-dimensional (3D) display, or the like.
  • the display 140 may include not only a display panel to output an image but also a bezel that houses a display panel.
  • the bezel according to an embodiment may include a touch sensor (not illustrated) for sensing a user interaction.
  • the memory 150 may be implemented as an internal memory such as, for example, and without limitation, a read-only memory (ROM) (for example, an electrically erasable programmable read-only memory (EEPROM)) or a random-access memory (RAM) included in the processor 120 , or as a memory separate from the processor 120 .
  • the memory 150 may be implemented as at least one of a memory embedded within the electronic apparatus 100 or a memory detachable from the electronic apparatus 100 according to the usage of data storage.
  • the data for driving the electronic apparatus 100 may be stored in the memory embedded within the electronic apparatus 100
  • the data for upscaling of the electronic apparatus 100 may be stored in the memory detachable from the electronic apparatus 100 .
  • a memory embedded in the electronic apparatus 100 may be implemented as at least one of a volatile memory, such as a dynamic random access memory (DRAM), a static random access memory (SRAM), or a synchronous dynamic random access memory (SDRAM), or a non-volatile memory, such as a one time programmable ROM (OTPROM), a programmable ROM (PROM), an erasable and programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a mask ROM, a flash ROM, a flash memory (for example, NAND flash or NOR flash), a hard disk drive (HDD), or a solid state drive (SSD).
  • the memory may be implemented as a memory card (for example, a compact flash (CF), secure digital (SD), micro secure digital (micro-SD), mini secure digital (mini-SD), extreme digital (xD), multi-media card (MMC), etc.), an external memory (for example, a universal serial bus (USB) memory) connectable to the USB port, or the like.
  • the user interface 160 may include various interface circuitry and be implemented with a device such as, for example, and without limitation, at least one of a button, a touch pad, a mouse, a keyboard, or a touch screen capable of performing the above-described display function and operation input function.
  • the button may be various types of buttons such as at least one of a mechanical button, a touch pad, a wheel, or the like, formed in an arbitrary area such as at least one of a front portion, a side portion, a back portion, or the like, of the outer surface of the main body of the electronic apparatus 100 .
  • the input and output interface 170 may include various input/output circuitry, such as, for example, and without limitation, at least one of a high-definition multimedia interface (HDMI), mobile high-definition link (MHL), universal serial bus (USB), display port (DP), Thunderbolt, video graphics array (VGA) port, RGB port, d-subminiature (D-SUB), digital visual interface (DVI), and the like.
  • the input and output interface 170 may input or output at least one of an audio signal and a video signal.
  • the input and output interface 170 may include separate ports for inputting or outputting an audio signal and a video signal, or may be implemented as a single port that inputs or outputs both audio and video signals.
  • the electronic apparatus 100 may further include a microphone 180 .
  • the microphone 180 may include an element to receive a user voice or other sound and convert to audio data.
  • the microphone 180 may receive the user voice in an active state.
  • the microphone 180 may be integrally formed on at least one of an upper side, the front side, a lateral side, or the like of the electronic apparatus 100 .
  • the microphone 180 may include various configurations such as a microphone for collecting user voice in an analog format, an amplifier circuit for amplifying the collected user voice, an audio-to-digital (A/D) conversion circuit for sampling the amplified user voice to convert into a digital signal, a filter circuitry for removing a noise element from the converted digital signal, or the like.
  • the electronic apparatus 100 may include the speaker 190 .
  • the speaker 190 may include an element to output various audio data, various alarm sounds, a voice message, or the like, which are processed by the input and output interface 170 .
  • FIG. 4 is a flowchart illustrating an example operation of tracking a user in a captured image capturing a user according to various embodiments.
  • the electronic apparatus 100 may obtain a captured image through the camera 110 in operation S 405 .
  • the captured image may be an image capturing the front of the electronic apparatus 100 .
  • since the user 1000 may be located in front of the electronic apparatus 100 as illustrated in FIG. 1 , the captured image may be an image capturing the user.
  • the electronic apparatus 100 may identify a user object from the captured image obtained in operation S 410 .
  • the user object may refer, for example, to a human object and the electronic apparatus 100 may identify whether the user object is included in the captured image.
  • the electronic apparatus 100 may identify the object area including the identified user object in operation S 415 .
  • the electronic apparatus 100 may track the identified object area in operation S 420 .
  • the tracking may refer, for example, to tracking an object area to identify a change in the object area.
  • the electronic apparatus 100 may identify the display area based on the tracked object area in the captured image in operation S 425 .
  • the display area may refer to an area displayed on the display 140 of the electronic apparatus 100 .
  • the entire captured image may not be displayed on the display 140 of the electronic apparatus 100 , and a portion corresponding to the display area of the entire area of the captured image may be displayed on the display 140 .
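To make the S405-S425 flow concrete, here is a hedged end-to-end sketch using OpenCV's stock pedestrian detector as a stand-in for whatever object recognition the apparatus actually uses (the detector choice and the direct box-to-display-area mapping are assumptions):

```python
import cv2

# Stock OpenCV person detector, standing in for the user-object recognizer.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # S405: obtain captured images from the camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame)  # S410: identify user objects
    if len(boxes) > 0:
        # S415: take the largest detection as the object area (x, y, w, h)
        x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
        # S420-S425: a real system would track this box across frames and
        # derive a display area from it (see the earlier sketches); here we
        # simply show the object area itself.
        cv2.imshow("display area", frame[y:y + h, x:x + w])
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```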
  • FIG. 5 is a diagram illustrating an example operation of displaying an image capturing a user taking a first pose according to various embodiments.
  • the electronic apparatus 100 may capture the user 1000 through the camera 110 and may obtain a captured image 520 of the user taking the first pose.
  • the electronic apparatus 100 may identify the user object 521 from the captured image 520 .
  • the electronic apparatus 100 may identify the object area 522 based on the identified user object 521 .
  • the electronic apparatus 100 may identify the display area 523 based on the identified object area 522 .
  • the electronic apparatus 100 may display the identified display area 523 on the display 140 of the electronic apparatus 100 .
  • the electronic apparatus 100 may divide the screen into a first area 141 and a second area 142 .
  • the electronic apparatus 100 may display the content received from the external server on a first area 141 .
  • the content may include the guide object 511 .
  • the electronic apparatus 100 may display the identified display area 523 of the captured image 520 in the second area 142 .
  • the image displayed in the second area 142 may include the user object 521 .
  • the electronic apparatus 100 may display the user object 521 on the second area 142 at the same time with displaying the guide object 511 on the first area 141 and thus, the user 1000 may easily mimic the pose of the guide object 511 while directly watching the display 140 .
  • FIG. 6 is a diagram illustrating an example operation of displaying an image capturing a user taking a second pose according to various embodiments.
  • the electronic apparatus 100 may capture the user 1000 through the camera 110 , and may obtain a captured image 620 of the user taking the second pose. Assume a situation in which the user takes the second pose after taking the first pose.
  • the electronic apparatus 100 may identify the user object 621 from the captured image 620 .
  • the electronic apparatus 100 may identify an object area 622 based on the identified user object 621 .
  • the display area 523 may correspond to the display area of FIG. 5 . If the user 1000 suddenly changes from the first pose to the second pose, the display area 523 may be maintained without being changed, in that the processing for identifying a new display area may take time. Assume that the user taking the second pose is captured over a larger area than when taking the first pose. The object area 622 corresponding to the second pose may then be larger than the display area 523 , and a part of an area 624 may fall outside the display area 523 . The portion of the area 624 outside the display area 523 may not be displayed on the display 140 of the electronic apparatus 100 .
  • the electronic apparatus 100 may display the identified display area 523 on the display 140 of the electronic apparatus 100 .
  • the content may include a guide object 611 .
  • the electronic apparatus 100 may display the identified display area 523 of the captured image 620 in the second area 142 .
  • the image displayed on the second area 142 may include the user object 621 .
  • the image displayed on the second area 142 may not include a part of the captured image 620 .
  • the electronic apparatus 100 may identify the display area 623 of the captured image 620 based on the identified object area 622 from the captured image 620 .
  • FIG. 7 is a flowchart illustrating an example operation of identifying a size of a display area by comparing images of a user taking a first pose and a second pose to identify a size of a display area according to various embodiments.
  • the electronic apparatus 100 may identify an object area in the first captured image in operation S 705 .
  • the first captured image may refer, for example, to an image captured by the user 1000 taking a first pose.
  • the first captured image may correspond to the captured image 520 of FIG. 5 .
  • the electronic apparatus 100 may identify a display area of the first captured image based on the object area identified in the first captured image in operation S 710 .
  • the electronic apparatus 100 may obtain the size information of the display area of the identified first captured image.
  • the display area of the first captured image may correspond to the display area 523 of the captured image 520 in FIG. 5 .
  • the electronic apparatus 100 may identify an object area in the second captured image in operation S 715 .
  • the second captured image may refer to an image capturing the user 1000 taking the second pose.
  • the second captured image may correspond to the captured image 620 of FIG. 6 .
  • the electronic apparatus 100 may identify a display area of the second captured image based on the object area identified in the second captured image in operation S 720 .
  • the electronic apparatus 100 may obtain the size information of the display area of the identified second captured image.
  • the display area of the second captured image may correspond to a display area 823 of FIG. 8 and a display area 923 of FIG. 9 .
  • the electronic apparatus 100 may identify whether the size of the display area of the second captured image is greater than the size of the display area of the first captured image in operation S 725 .
  • the electronic apparatus 100 may identify the size of the display area of the third captured image based on the size of the display area of the first captured image in operation S 730 . That is, the size of the display area of the third captured image may be the same as the size of the display area obtained from the first captured image.
  • the third captured image may refer, for example, to an image captured at a point of time after the second captured image.
  • the electronic apparatus 100 may identify the size of the display area of the third captured image based on the size of the display area of the second captured image in operation S 735 .
  • the size of the display area of the third captured image may be the same as the size of the display area obtained from the second captured image.
  • the electronic apparatus 100 may maintain the larger of the display area obtained from the first captured image and the display area obtained from the second captured image. That is, a new captured image may be obtained over time, and the pose of the user 1000 may vary in the new captured image. However, even if the pose of the user 1000 varies, the electronic apparatus 100 may maintain the largest display area, and may display an image corresponding to the largest display area among the plurality of captured images in the second area 142 .
  • FIG. 8 is a diagram illustrating an example operation of identifying a display area according to various embodiments.
  • the captured image 820 may include a user object 821 .
  • the electronic apparatus 100 may identify an object area 822 based on a user object 821 .
  • the captured image 820 may be an image capturing the user 1000 taking a second pose.
  • the captured image 820 may include the user object 821 , and the electronic apparatus 100 may identify the object area 822 based on the user object 821 .
  • in identifying the display area of the captured image 820 , the electronic apparatus 100 may extend the display area outward from the object area 822 in all directions: left, right, up, and down.
  • the display area 823 of the captured image 820 corresponding to the second pose may be an area extended in all directions up, down, left, and right compared with the display area 523 of the captured image 520 corresponding to the first pose (e.g., refer to FIG. 5 ).
  • FIG. 9 is a diagram illustrating an example operation of identifying a display area according to various embodiments.
  • a captured image 920 may include a user object 921 .
  • the electronic apparatus 100 may identify the object area 922 based on the user object 921 .
  • the captured image 920 may be an image capturing the user 1000 taking a second pose.
  • the captured image 920 may include the user object 921 , and the electronic apparatus 100 may identify the object area 922 based on the user object 921 .
  • the size of the display area 923 of the captured image 920 may be larger than the size of the display area 523 of the captured image 520 .
  • although the display area could extend from the object area 922 in all directions (up, down, left, and right), the electronic apparatus 100 may extend only the upper direction to expand the display area.
  • the display area 923 of the captured image 920 corresponding to the second pose may be an area extended upward compared to the display area 523 of the captured image 520 corresponding to the first pose (e.g., refer to FIG. 5 ).
  • the display area may be extended in the upward direction due to the movement of the user, while the lower edge remains fixed to the bottom.
  • FIG. 10 is a diagram illustrating an example operation of displaying a third captured image based on an identified display area according to various embodiments.
  • the electronic apparatus 100 may obtain a captured image 1020 by capturing the user 1000 taking the second pose through the camera 110 .
  • the captured image 1020 may include a user object 1021 .
  • the electronic apparatus 100 may identify an object area 1022 based on the user object 1021 .
  • a display area 1023 may be identified based on the identified object area 1022 .
  • a predetermined time may elapse while the user 1000 is taking the second pose.
  • the electronic apparatus 100 may display an image corresponding to the identified display area 1023 of the captured image 1020 in the second area 142 .
  • the user object 1021 may be displayed in the second area 142 .
  • a guide object 1011 may be displayed in the first area 141 .
  • FIG. 11 is a flowchart illustrating an example operation of identifying a display area based on a threshold time according to various embodiments.
  • the electronic apparatus 100 may identify the display area of the first captured image and the display area of the second captured image in operation S 1105 .
  • the operation S 1105 may correspond, for example, to S 705 to S 720 of FIG. 7 .
  • the electronic apparatus 100 may identify whether the size of the display area of the second captured image is greater than the size of the display area of the first captured image in operation S 1110 .
  • the operation S 1110 may correspond to S 725 of FIG. 7 .
  • if the size of the display area of the second captured image is not greater than the size of the display area of the first captured image (“N” in operation S 1110 ), the electronic apparatus 100 may identify the size of the display area of the third captured image based on the size of the display area of the first captured image in operation S 1115 .
  • the size of the display area of the third captured image may be the same as the size of the display area obtained from the first captured image.
  • the third captured image may refer, for example, to an image captured at a point of time after the second captured image.
  • the electronic apparatus 100 may identify whether the display area of the second captured image has been identified for a threshold time or more in operation S 1120 . If the display area of the second captured image is not identified for the threshold time or more (“N” in operation S 1120 ), the electronic apparatus 100 may identify the size of the display area of the third captured image based on the size of the display area of the first captured image in operation S 1115 .
  • if the display area of the second captured image is identified for the threshold time or more (“Y” in operation S 1120 ), the electronic apparatus 100 may identify the size of the display area of the third captured image based on the size of the display area of the second captured image in operation S 1125 . That is, the size of the display area of the third captured image may be the same as the size of the display area obtained from the second captured image (a sketch of operations S 1110 to S 1125 follows below).
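A minimal sketch of operations S 1110 to S 1125 (the threshold value, the (width, height) tuple representation, and the function names are assumptions):

```python
THRESHOLD_SEC = 2.0  # assumed; the patent only speaks of "a threshold time"

def area_size(area):
    width, height = area
    return width * height

def third_display_area(first, second, held_for_sec):
    # S 1110: is the second display area larger than the first?
    # S 1120: has it been identified for the threshold time or more?
    if area_size(second) > area_size(first) and held_for_sec >= THRESHOLD_SEC:
        return second   # S 1125: follow the second captured image
    return first        # S 1115: keep the first captured image's area

print(third_display_area((100, 500), (130, 650), held_for_sec=1.0))  # (100, 500)
print(third_display_area((100, 500), (130, 650), held_for_sec=2.5))  # (130, 650)
```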
  • FIG. 12 is a diagram illustrating an example of changing of a display area over time by specifying the operation of FIG. 11 according to various embodiments.
  • a table 1205 illustrates a plurality of consecutive captured images.
  • the plurality of consecutive captured images may include a user object taking a first pose from one to four seconds, a user object taking a second pose from five seconds to eight seconds, and a user object taking a first pose from nine seconds to 13 seconds.
  • the captured image obtained at one second takes the first pose, which may correspond to the captured image 520 of FIG. 5 .
  • the captured image obtained at five seconds takes the second pose, and may correspond to the captured image 620 of FIG. 6 . Since the first pose was taken just before the captured image obtained at five seconds, the display area 523 may be maintained. Thus, a part of the area 624 of the captured image 620 may not be displayed on the display 140 . The user 1000 watching the display 140 may feel dizzy if the display area is changed immediately whenever the pose changes. Accordingly, the electronic apparatus 100 may delay the change of the display area by a threshold time.
  • once the second pose is maintained for the threshold time or more, the electronic apparatus 100 may change to the display area 1023 corresponding to the second pose.
  • the captured image 1020 obtained at seven seconds may be displayed based on the display area 1023 without omitting any part of the object area 1022 .
  • a captured image 1220 obtained at nine seconds may include a user object 1221 that has changed from the second pose back to the first pose.
  • the display area may not be changed immediately to correspond to the changed first pose. Accordingly, the object area in the captured image 1220 may be newly identified as 1222 , but the display area may be maintained as the display area 1023 corresponding to the captured image 1020 .
  • once the first pose is maintained for the threshold time or more, the electronic apparatus 100 may change to the display area 523 corresponding to the first pose.
  • the captured image 520 obtained at 11 seconds may be displayed on the display 140 based on the display area 523 . If the display area 1023 were maintained even after the threshold time elapses, the display area would only ever be extended, and the display area might not match the content image displayed on the display 140 (a rough simulation of this timeline follows below).
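The timeline of FIG. 12 can be simulated roughly as follows (the threshold value and the area labels are assumed); the applied display area follows the identified one only after it has been stable for the threshold time, in both the enlarging and the shrinking direction:

```python
THRESHOLD_SEC = 2  # assumed value

def simulate(identified_per_sec):
    applied = identified_per_sec[0]
    pending = pending_since = None
    for t, area in enumerate(identified_per_sec, start=1):
        if area != applied:
            if pending != area:
                # A new candidate area appeared; start the timer.
                pending, pending_since = area, t
            elif t - pending_since >= THRESHOLD_SEC:
                # Candidate held long enough; switch the applied area.
                applied, pending = area, None
        else:
            pending = None
        print(f"{t:>2}s identified={area} applied={applied}")

# 1-4 s: first pose (area 523); 5-8 s: second pose (area 1023);
# 9-13 s: first pose again. The applied area changes at 7 s and 11 s.
simulate(["523"] * 4 + ["1023"] * 4 + ["523"] * 5)
```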
  • FIG. 13 is a flowchart illustrating an example operation of identifying a display area based on a threshold number according to various embodiments.
  • operations S 1305 , S 1310 , and S 1315 may correspond, for example, to S 1105 , S 1110 , and S 1115 of FIG. 11 .
  • the overlapping description is not repeated here.
  • the electronic apparatus 100 may identify whether the display area of the second captured image has been identified a threshold number of times or more in operation S 1320 .
  • identifying the display area a threshold number of times or more may refer, for example, to the display area being obtained for every predetermined frame based on real-time object area tracking.
  • the electronic apparatus 100 may count the number of times the display area is obtained. Counting is performed to identify whether the movement of the user 1000 is repeated. For example, if the user takes the second pose only once, the existing display area may be kept without changing the display area. However, if the second pose is taken twice or more, the electronic apparatus 100 may change to a display area corresponding to the second pose. This will be described in greater detail below with reference to FIG. 14 .
  • if the display area of the second captured image is not identified the threshold number of times or more (“N” in operation S 1320 ), the electronic apparatus 100 may identify the size of the display area of the third captured image based on the size of the display area of the first captured image in operation S 1315 .
  • if the display area of the second captured image is identified the threshold number of times or more (“Y” in operation S 1320 ), the electronic apparatus 100 may identify the size of the display area of the third captured image based on the size of the display area of the second captured image in operation S 1325 .
  • the size of the display area of the third captured image may be the same as the size of the display area obtained from the second captured image (a sketch of operations S 1310 to S 1325 follows below).
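A corresponding sketch for the threshold-count rule of FIG. 13 (the threshold value and the tuple representation are assumed):

```python
THRESHOLD_COUNT = 2  # assumed; the patent only speaks of "a threshold number"

def size(area):
    return area[0] * area[1]

def third_display_area(first, second, times_identified):
    # S 1310: larger?  S 1320: identified the threshold number of times or more?
    if size(second) > size(first) and times_identified >= THRESHOLD_COUNT:
        return second   # S 1325
    return first        # S 1315

print(third_display_area((100, 500), (130, 650), times_identified=1))  # (100, 500)
print(third_display_area((100, 500), (130, 650), times_identified=2))  # (130, 650)
```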
  • FIG. 14 is a diagram illustrating an example of changing of a display area over time by specifying the operation of FIG. 13 according to various embodiments.
  • a table 1405 shows a plurality of consecutive captured images.
  • the plurality of consecutive captured images may include a user object taking a first pose from one second to two seconds, a user object taking a second pose from three seconds to four seconds, a user object taking the first pose from five seconds to six seconds, a user object taking the second pose from seven seconds to eight seconds, and a user object taking the first pose from nine seconds to 13 seconds.
  • the user 1000 may alternate between the first pose and the second pose in units of two seconds.
  • referring to the embodiment of FIG. 12 , the display area is changed only when the same pose is maintained for a threshold time or more; referring to the embodiment of FIG. 14 , the display area is changed only when the same pose is taken a threshold number of times or more.
  • the user object takes the first pose, and the captured image may correspond to the captured image 520 of FIG. 5 .
  • the captured image obtained at three seconds may correspond to the captured image 620 of FIG. 6 .
  • although the display area identified from the user object taking the second pose becomes larger in the captured image 620 , the identified display area 1023 has not yet been identified the threshold number of times (e.g., twice). Accordingly, the image to which the existing display area 523 is applied, rather than the newly identified display area 1023 , may be transmitted to the display 140 .
  • the captured image obtained at five seconds may take the first pose again.
  • the electronic apparatus 100 may maintain the initial display area 523 as it is.
  • in the captured image obtained at seven seconds, the user object may take the second pose again.
  • with the captured image obtained at seven seconds, following the captured image obtained at three seconds, the display area 1023 may have been identified a threshold number of times (e.g., twice) or more. Accordingly, the electronic apparatus 100 may change the display area 523 to the display area 1023 to display the image on the display 140 .
  • the captured image obtained at 9 seconds may again take a first pose. Since the display area corresponding to the first pose is not larger than the display area corresponding to the second pose, the electronic apparatus 100 may display the image on the display 140 while maintaining the existing display area 1023 .
  • although the captured image obtained at 11 seconds also maintains the first pose, the electronic apparatus 100 may display the image on the display 140 while maintaining the existing display area 1023 , since the display area corresponding to the first pose is not larger than the display area corresponding to the second pose.
  • the embodiment of FIG. 13 may also be implemented such that the threshold time described with reference to FIG. 11 is applied simultaneously.
  • FIG. 15 is a flowchart illustrating an example operation of identifying a display area based on a content received from an external server according to various embodiments.
  • the electronic apparatus 100 may identify the display area of the first captured image and the display area of the second captured image in operation S 1505 .
  • the electronic apparatus 100 may receive content from an external server in operation S 1510 .
  • S 1510 is described as being performed after S 1505 , but S 1510 may be performed in advance.
  • the electronic apparatus 100 may identify the guide object from the received content in operation S 1515 .
  • the guide object has been described in greater detail above with reference to FIGS. 5 , 6 , and 10 .
  • the electronic apparatus 100 may track a guide area including the identified guide object in operation S 1520 .
  • the guide area may refer, for example, to an area including a guide object in the received content. Therefore, when the size of the guide object is changed, the size of the guide area may be changed.
  • the electronic apparatus 100 may identify the display area of the third captured image based on the tracked guide area, the display area of the first captured image, and the display area of the second captured image.
  • the electronic apparatus 100 may obtain first ratio information based on a guide area obtained from a first content image corresponding to a point in time when the first captured image is received and a guide area obtained from a second content image corresponding to a point in time when the second captured image is received.
  • the electronic apparatus 100 may obtain second ratio information based on a display area of the first captured image and a display area of the second captured image.
  • the electronic apparatus 100 may identify the larger ratio out of the first ratio information and the second ratio information, and may identify the display area of the third captured image based on the identified larger ratio.
  • the ratio information may be obtained based on an object area rather than a display area of the captured image. This embodiment will be described in greater detail below with reference to FIGS. 16 , 17 , 18 and 19 . Although the ratio information is described based on the object area with reference to FIGS. 16 , 17 , 18 and 19 , it may be equally applied to the display area.
  • the display area of the third captured image may be determined in additional consideration of the guide area included in the content as well as the captured images.
  • FIG. 16 is a flowchart illustrating an example operation of considering ratio information to identify a size of a display area according to various embodiments.
  • the electronic apparatus 100 may continuously receive a plurality of content images included in the content.
  • the electronic apparatus 100 may identify a guide area of a first content image and a guide area of a second content image in operation S 1605 .
  • the electronic apparatus 100 may obtain the first ratio information based on the size of the guide area of the first content image and the size of the guide area of the second content image in operation S 1610 .
  • the first content image and the second content image may refer to consecutive images in time sequence.
  • the electronic apparatus 100 may identify an object area of the first captured image and an object area of the second captured image in operation S 1615 .
  • the electronic apparatus 100 may obtain second ratio information based on the size of the object area of the first captured image and the size of the object area of the second captured image in operation S 1620 .
  • the first ratio information and the second ratio information may refer, for example, to the ratio of size change between the compared images.
  • the first ratio information may refer, for example, to the ratio of the size of the guide area of the first content image to the size of the guide area of the second content image.
  • the second ratio information may refer, for example, to the ratio of the size of the object area of the first captured image to the size of the object area of the second captured image.
  • the ratio information may be divided into vertical ratio information and horizontal ratio information.
  • the electronic apparatus 100 may identify whether the first ratio information is greater than the second ratio information in operation S 1625 . If the first ratio information is greater than the second ratio information (“Y” in operation S 1625 ), the electronic apparatus 100 may identify the size of the display area of the third captured image based on the first ratio information in operation S 1630 . The size of the display area obtained from the third captured image may be identified based on an area obtained by multiplying the object area identified in the third captured image by the first ratio information. The electronic apparatus 100 may obtain the changed object area by, for example, multiplying the object area of the first captured image by the first ratio information corresponding to the content image, and may identify the display area of the third captured image based on the changed object area.
  • if the first ratio information is not greater than the second ratio information (“N” in operation S 1625 ), the electronic apparatus 100 may identify the size of the display area of the third captured image based on the second ratio information in operation S 1635 .
  • the electronic apparatus 100 may, for example, multiply the object area of the first captured image by the second ratio information to obtain a changed object area, and may identify a display area of the third captured image based on the changed object area.
  • in this case, the same result may be obtained as in operation S 735 , in which the display area of the third captured image is identified using only the captured images, without considering information about the content.
  • here, the ratio information is applied to the object area, and the display area is eventually identified based on the changed object area. Therefore, the ratio information may instead be applied directly to the display area, not the object area.
  • the object area of the captured image is used to obtain the second ratio information.
  • the electronic apparatus 100 may also be implemented to use the display area of the captured image to obtain the second ratio information (a consolidated sketch of the FIG. 16 flow follows below).
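Putting operations S 1605 to S 1635 together, a consolidated sketch could look like the following (the representations, names, and the way the ratio pair is compared are assumptions; the FIG. 17 numbers are used as example inputs):

```python
def ratio(prev, curr):
    # (horizontal ratio, vertical ratio), each normalized to the form 1 : r.
    return (curr[0] / prev[0], curr[1] / prev[1])

def scaled_object_area(obj, guide1, guide2, obj1, obj2):
    first = ratio(guide1, guide2)   # S 1605-S 1610: from the content images
    second = ratio(obj1, obj2)      # S 1615-S 1620: from the captured images
    # S 1625: pick the larger ratio (compared here by horizontal ratio as a
    # simplification; the patent keeps horizontal/vertical ratios separate).
    r = first if first[0] > second[0] else second
    # S 1630 / S 1635: scale the object area; the display area is then
    # derived from the changed object area.
    return (obj[0] * r[0], obj[1] * r[1])

# FIG. 17/18 values: guide area (w, h) grows (100, 500) -> (150, 750), i.e. 1:1.5;
# object area grows (100, 500) -> (130, 650), i.e. 1:1.3. The content ratio wins.
print(scaled_object_area((100, 500), (100, 500), (150, 750), (100, 500), (130, 650)))
# -> (150.0, 750.0)
```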
  • FIG. 17 is a diagram illustrating an example of change in a size of an image of taking a first pose and an image of taking a second pose according to various embodiments.
  • the electronic apparatus 100 may obtain a first content image 510 including a guide object 511 and a first captured image 520 including a user object 521 .
  • the first content image 510 and the first captured image 520 may be simultaneously displayed on the display 140 .
  • the vertical information of the guide object 511 may be h 11 (e.g., 500 , unit omitted) and the horizontal information may be w 11 (e.g., 100 , unit omitted).
  • the vertical information of the user object 521 may be h 21 (e.g., 500 , unit omitted) and the horizontal information may be w 21 (e.g., 100 , unit omitted).
  • the electronic apparatus 100 may obtain the second content image 610 including a guide object 611 and the second captured image 620 including a user object 621 .
  • the second content image 610 and the second captured image 620 may be simultaneously displayed on the display 140 .
  • the vertical information of the guide object 611 may be h 12 (e.g., 750 , unit omitted) and the horizontal information may be w 12 (e.g., 150 , unit omitted).
  • the vertical information of the user object 621 may be h 22 (e.g., 650 , unit omitted) and the horizontal information may be w 22 (e.g., 130 , unit omitted).
  • the electronic apparatus 100 may obtain the first ratio information based on the first content image 510 and the second content image 610 and may obtain the second ratio information based on the first captured image 520 and the second captured image 620 .
  • FIG. 18 is a diagram illustrating example ratio information between images of FIG. 17 according to various embodiments.
  • the electronic apparatus 100 may obtain horizontal ratio information 1805 and vertical ratio information 1815 from the first content image 510 and the second content image 610 .
  • the first ratio information may include the horizontal ratio information 1805 and the vertical ratio information 1815 .
  • the electronic apparatus 100 may obtain the horizontal ratio information 1810 and the vertical ratio information 1820 from the first captured image 520 and the second captured image 620 .
  • the second ratio information may include the horizontal ratio information 1810 and the vertical ratio information 1820 .
  • since the horizontal information of the first content image 510 is w 11 (100) and the horizontal information of the second content image 610 is w 12 (150), the horizontal ratio of the first content image 510 and the second content image 610 may be w 11 :w 12 (100:150) or 1:w 12 /w 11 (1:1.5). Since the horizontal information of the first captured image 520 is w 21 (100) and the horizontal information of the second captured image 620 is w 22 (130), the horizontal ratio of the first captured image 520 and the second captured image 620 may be w 21 :w 22 (100:130) or 1:w 22 /w 21 (1:1.3).
  • the vertical ratio of the first content image 510 and the second content image 610 may be h 11 :h 12 (500:750) or 1:h 12 /h 11 (1:1.5).
  • since the vertical information of the first captured image 520 is h 21 (500) and the vertical information of the second captured image 620 is h 22 (650), the vertical ratio of the first captured image 520 and the second captured image 620 may be h 21 :h 22 (500:650) or 1:h 22 /h 21 (1:1.3).
  • FIG. 19 is a diagram illustrating an example operation of displaying an image by applying ratio information according to various embodiments.
  • the electronic apparatus 100 may identify larger ratio information between the first ratio information and the second ratio information based on operation S 1625 of FIG. 16 .
  • the first ratio information (1:1.5) may be the larger ratio.
  • the electronic apparatus 100 may multiply the object area 522 corresponding to the first captured image 520 by the first ratio information.
  • the electronic apparatus 100 may multiply the horizontal information w 21 (100) of the object area 522 corresponding to the first captured image 520 by the horizontal ratio information 1805 of the first ratio information to obtain the changed horizontal information w 32 (e.g., 150 , unit omitted).
  • the electronic apparatus 100 may multiply the vertical information h 21 (500) of the object area 522 corresponding to the first captured image 520 by the vertical ratio information 1815 of the first ratio information to obtain the changed vertical information h 32 (e.g., 750 , unit omitted).
  • the electronic apparatus 100 may identify the changed object area based on the changed horizontal information w 32 (150) and the changed vertical information h 32 (750). Although an image 1920 in which the changed object area is identified takes the first pose, the electronic apparatus 100 may identify an object area 1922 including a user object 1921 that is greater than the size of the existing object area 522 including the user object 521 .
  • the electronic apparatus 100 may identify the display area of the third captured image based on the extended object area 1922 .
  • in the above description, the ratio information is described as being applied to the object area, based on which the electronic apparatus 100 may identify the display area of the third captured image. However, the electronic apparatus 100 may also be implemented to apply the ratio information directly to a display area rather than an object area.
  • the electronic apparatus 100 may identify the size of a new display area by applying the first ratio information directly to the display area 523 of the first captured image 520 of FIG. 5 .
  • the electronic apparatus 100 may display an image on the display 140 based on the size of the identified display area (see the sketch below).
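The alternative noted above, applying the larger ratio directly to the display area rather than the object area, reduces to a simple scaling; the following is an illustrative sketch with assumed values, not the patent's implementation:

```python
def scale_display_area(display, ratio_hv):
    # display = (width, height); ratio_hv = (horizontal ratio, vertical ratio).
    return (display[0] * ratio_hv[0], display[1] * ratio_hv[1])

# E.g., scaling an assumed display area by the 1:1.5 ratio of FIG. 18.
print(scale_display_area((200, 600), (1.5, 1.5)))  # (300.0, 900.0)
```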
  • FIG. 20 is a flowchart illustrating an example method of controlling an electronic apparatus according to various embodiments.
  • a method of controlling the electronic apparatus 100 may include tracking an object area including a user object from a captured image and identifying a display area from the captured image based on the tracked object area in operation S 2005 .
  • the method may include identifying a display area of the first captured image based on the object area identified from the first captured image in operation S 2010 .
  • the method may include identifying the display area of the second captured image based on an object area identified from a second captured image in operation S 2015 .
  • the method may include identifying a display area of a third captured image based on a display area of the first captured image and a display area of the second captured image in operation S 2020 .
  • the identifying a display area of the third captured image in operation S 2020 may include, based on a size of a display area of the second captured image being greater than a size of a display area of the first captured image, identifying a display area of the third captured image based on a display area of the second captured image.
  • the identifying a display area of the first captured image in operation S 2010 and the identifying the display area of the second captured image in operation S 2015 may include identifying the object area from the captured image based on a height of the user object and identifying the display area based on the identified height of the object area.
  • the identifying a display area of a third captured image in operation S 2020 may include, based on a display area greater than a size of a display area of the first captured image being identified from a plurality of second captured images, which are captured after the first captured image and of which the number is greater than or equal to a threshold number, identifying a display area of the third captured image, and the third captured image may be captured after the plurality of second captured images.
  • the identifying a display area of the third captured image in operation S 2020 may include, based on a display area greater than a size of a display area of the first captured image being identified from a plurality of second captured images, which are captured consecutively and of which the number is greater than or equal to a threshold number, identifying a display area of the third captured image.
  • the method may further include displaying a screen in which a content image received from an external server is included in a first area and an image corresponding to the identified display area is included in a second area.
  • the method may further include tracking a guide area including a guide object in the content image, and the identifying a display area of the third captured image in operation S 2020 may include identifying a display area of the third captured image based on size information of the tracked guide area, size information of an object area of the first captured image, and size information of an object area of the second captured image.
  • the method may further include obtaining first ratio information based on a size of the guide area identified in the first content image and a size of the guide area identified in a second content image and obtaining second ratio information based on a size of the object area in the first captured image and a size of the object area in the second captured image, and the identifying a display area of the third captured image in operation S 2020 may include identifying a display area of the third captured image based on the first ratio information and the second ratio information.
  • the identifying a display area of the third captured image in operation S 2020 may include identifying a display area of the third captured image based on ratio information which is relatively greater out of the first ratio information and the second ratio information.
  • the identifying a display area of the first captured image in operation S 2010 , identifying a display area of the second captured image in operation S 2015 , and identifying a display area of a third captured image in operation S 2020 may include identifying a display area to be displayed through the display 140 from the captured image based on resolution information of the display 140 and the tracked object area.
  • the method of controlling the electronic apparatus 100 as illustrated in FIG. 20 may be executed by the electronic apparatus 100 having the configuration as illustrated by way of non-limiting example in FIG. 2 or FIG. 3 , or by an electronic apparatus having other configurations.
  • Methods according to the embodiments as described above may be implemented as an application format installable in an existing electronic apparatus.
  • Methods according to the various example embodiments as described above may be implemented as a software upgrade or a hardware upgrade for an existing electronic apparatus.
  • Various example embodiments described above may be performed through an embedded server provided in an electronic apparatus, or through an external server of at least one of an electronic apparatus or a display device.
  • Various example embodiments may be implemented in software, including instructions stored on non-transitory machine-readable storage media readable by a machine (e.g., a computer).
  • An apparatus that calls instructions from the storage medium and executes the called instructions may include an electronic apparatus (e.g., an electronic apparatus A) according to the disclosed embodiments.
  • when the instructions are executed by a processor, the processor may perform a function corresponding to the instructions directly or by using other components under the control of the processor.
  • the instructions may include a code generated by a compiler or a code executable by an interpreter.
  • a machine-readable storage medium may be provided in the form of a non-transitory storage medium; "non-transitory" means that the storage medium is tangible, and the term does not distinguish the case in which data is semi-permanently stored in the storage medium from the case in which data is temporarily stored in the storage medium.
  • the method according to the above-described embodiments may be included in a computer program product.
  • the computer program product may be traded as a product between a seller and a consumer.
  • the computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), through an application store (e.g., PLAYSTORETM), or directly online.
  • at least a portion of the computer program product may be at least temporarily stored or temporarily generated in a server of the manufacturer, a server of the application store, or a machine-readable storage medium such as memory of a relay server.
  • the respective elements (e.g., a module or a program) mentioned above may include a single entity or a plurality of entities. At least one element or operation of the corresponding elements mentioned above may be omitted, or at least one other element or operation may be added. Alternatively or additionally, some components (e.g., a module or a program) may be combined to form a single entity. In this case, the integrated entity may perform functions of at least one function of an element of each of the plurality of elements in the same manner as, or in a similar manner to, that performed by the corresponding element of the plurality of elements before integration.
  • the module, the program module, or operations executed by other elements according to embodiments may be executed consecutively, in parallel, repeatedly, or heuristically; at least some operations may be executed in a different order or omitted, or another operation may be added.

US17/196,170 2020-08-31 2021-03-09 Electronic apparatus and controlling method thereof Active 2041-09-25 US11758259B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200110616A KR20220028951A (ko) 2020-08-31 2020-08-31 전자 장치 및 그 제어 방법
KR10-2020-0110616 2020-08-31

Publications (2)

Publication Number Publication Date
US20220070360A1 US20220070360A1 (en) 2022-03-03
US11758259B2 true US11758259B2 (en) 2023-09-12

Family

ID=80355320

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/196,170 Active 2041-09-25 US11758259B2 (en) 2020-08-31 2021-03-09 Electronic apparatus and controlling method thereof

Country Status (3)

Country Link
US (1) US11758259B2 (ko)
KR (1) KR20220028951A (ko)
WO (1) WO2022045509A1 (ko)


Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62270U (ko) 1985-06-14 1987-01-06
US20020136455A1 (en) * 2001-01-31 2002-09-26 I-Jong Lin System and method for robust foreground and background image data separation for location of objects in front of a controllable display within a camera view
US20050074185A1 (en) * 2003-10-07 2005-04-07 Jee-Young Jung Apparatus and method for controlling an auto-zooming operation of a mobile terminal
US20060267927A1 (en) * 2005-05-27 2006-11-30 Crenshaw James E User interface controller method and apparatus for a handheld electronic device
US20100091110A1 (en) 2008-10-10 2010-04-15 Gesturetek, Inc. Single camera tracker
US20100201837A1 (en) 2009-02-11 2010-08-12 Samsung Digital Imaging Co., Ltd. Photographing apparatus and method
JP2011059977A (ja) 2009-09-10 2011-03-24 Panasonic Corp 撮像装置
US20150356743A1 (en) 2012-07-17 2015-12-10 Nikon Corporation Photographic subject tracking device and camera
KR101414362B1 (ko) 2013-01-30 2014-07-02 한국과학기술원 영상인지 기반 공간 베젤 인터페이스 방법 및 장치
WO2015020703A1 (en) 2013-08-04 2015-02-12 Eyesmatch Ltd Devices, systems and methods of virtualizing a mirror
KR102266361B1 (ko) 2013-08-04 2021-06-16 아이즈매치 리미티드 거울을 가상화하는 디바이스들, 시스템들 및 방법들
US20150063661A1 (en) * 2013-09-03 2015-03-05 Samsung Electronics Co., Ltd. Method and computer-readable recording medium for recognizing object using captured image
JP6200270B2 (ja) 2013-10-11 2017-09-20 サターン ライセンシング エルエルシーSaturn Licensing LLC 情報処理装置及び情報処理方法、並びにコンピューター・プログラム
KR101687252B1 (ko) 2014-11-06 2016-12-16 장재윤 맞춤형 개인 트레이닝 관리 시스템 및 방법
US9781350B2 (en) 2015-09-28 2017-10-03 Qualcomm Incorporated Systems and methods for performing automatic zoom
US20170302719A1 (en) * 2016-04-18 2017-10-19 Qualcomm Incorporated Methods and systems for auto-zoom based adaptive video streaming
US10313417B2 (en) 2016-04-18 2019-06-04 Qualcomm Incorporated Methods and systems for auto-zoom based adaptive video streaming
US20170358119A1 (en) * 2016-06-08 2017-12-14 Qualcomm Incorporated Material-aware three-dimensional scanning
US20170366738A1 (en) 2016-06-17 2017-12-21 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
US10187579B1 (en) 2017-06-30 2019-01-22 Polycom, Inc. People detection method for auto-framing and tracking in a video conference
US10574899B2 (en) 2017-06-30 2020-02-25 Polycom, Inc. People detection method for auto-framing and tracking in a video conference
KR102033643B1 (ko) 2017-07-10 2019-10-17 모젼스랩 (주) 대상체 인식비율에 따른 이용자 모션분석 시스템
US20200197746A1 (en) 2017-08-18 2020-06-25 Alyce Healthcare Inc. Method for providing posture guide and apparatus thereof
KR102099316B1 (ko) 2018-03-28 2020-04-09 주식회사 스탠스 헬스케어를 위한 증강현실 디스플레이 장치 및 이를 이용한 헬스케어 시스템
KR102112236B1 (ko) 2019-09-26 2020-05-18 주식회사 홀로웍스 모션 디텍팅 기반의 가상 품새 에스티메이팅 시스템

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report and Written Opinion dated Jun. 18, 2021 in corresponding International Application No. PCT/KR2021/002997.

Also Published As

Publication number Publication date
WO2022045509A1 (en) 2022-03-03
KR20220028951A (ko) 2022-03-08
US20220070360A1 (en) 2022-03-03

Similar Documents

Publication Publication Date Title
US10462411B2 (en) Techniques for video analytics of captured video content
JP6165846B2 (ja) 目のトラッキングに基づくディスプレイの一部の選択的強調
US9319632B2 (en) Display apparatus and method for video calling thereof
JP5450739B2 (ja) 画像処理装置及び画像表示装置
US20170263056A1 (en) Method, apparatus and computer program for displaying an image
KR102317021B1 (ko) 디스플레이 장치 및 이의 영상 보정 방법
WO2013096165A1 (en) A method, apparatus, and system for energy efficiency and energy conservation including dynamic user interface based on viewing conditions
EP3065413B1 (en) Media streaming system and control method thereof
CN106201284B (zh) 用户界面同步系统、方法
US20220172440A1 (en) Extended field of view generation for split-rendering for virtual reality streaming
KR20210041757A (ko) 전자 장치 및 그 제어 방법
EP4154947A1 (en) Electronic apparatus and control method therefor
US11758259B2 (en) Electronic apparatus and controlling method thereof
CN112954212A (zh) 视频生成方法、装置及设备
CN111988525A (zh) 图像处理方法及相关装置
US11373340B2 (en) Display apparatus and controlling method thereof
KR20210049582A (ko) 전자 장치 및 그 제어 방법
KR20230137202A (ko) 디스플레이 장치 및 그 제어 방법
US20240069703A1 (en) Electronic apparatus and control method thereof
CN108304237A (zh) 一种移动终端的图像处理方法及装置、移动终端
US20240163392A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
WO2023182667A1 (ko) 디스플레이 장치 및 그 제어 방법
US20230094993A1 (en) Electronic apparatus and controlling method thereof
US11418694B2 (en) Electronic apparatus and control method thereof
US11678048B2 (en) Image display apparatus, image display method, and electronic device

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, DONGHO;KIM, JONGHO;SIGNING DATES FROM 20210308 TO 20210309;REEL/FRAME:055535/0432

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE