KR20180081353A - Electronic device and operating method thereof - Google Patents

Electronic device and operating method thereof

Info

Publication number
KR20180081353A
Authority
KR
South Korea
Prior art keywords
selection tool
electronic device
various embodiments
processor
image
Prior art date
Application number
KR1020170002488A
Other languages
Korean (ko)
Inventor
김규원
손광훈
김선옥
Original Assignee
Samsung Electronics Co., Ltd.
Industry-Academic Cooperation Foundation, Yonsei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. and Industry-Academic Cooperation Foundation, Yonsei University
Priority to KR1020170002488A
Publication of KR20180081353A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 - Selection of displayed objects or displayed text elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/12 - Bounding box

Abstract

According to various embodiments of the present invention, a method for selecting an object by dragging through an electronic device, and a device thereof, are disclosed. According to various embodiments of the present invention, the electronic device includes: a display; a memory; and a processor functionally connected with the display and the memory. The processor may be configured to display an image through the display, draw a selection tool on the image in response to a user input, determine a correction range based on objects surrounding the selection tool upon completion of the user input, and correct and display the selection tool so that it becomes the nearest neighbor to the object based on the correction range. Various other embodiments of the present invention are possible.

Description

ELECTRONIC DEVICE AND OPERATING METHOD THEREOF

FIELD OF THE INVENTION

Various embodiments of the present invention disclose methods and apparatus for object selection by dragging in an electronic device.

2. Description of the Related Art

Recently, with the development of digital technology, various types of electronic devices, such as mobile phones, smartphones, tablet PCs, notebooks, personal digital assistants (PDAs), wearable devices, digital cameras, and personal computers, have come into wide use.

Electronic devices provide various image editing functions (e.g., an object selection function) to meet users' image editing needs. For example, an electronic device may select an object for editing (e.g., cropping, copying, etc.) in response to a user's dragging input on an image displayed through the display. According to one embodiment, the user can select only a specific object in the image (e.g., to cut out an unnecessary portion other than the object) or perform editing such as copying.

In response to the dragging input, the electronic device can draw a selection tool of a set type (e.g., a bounding box such as a rectangular box, a freeform box, or an elliptical box) and display it on the image. The user can adjust the displayed selection tool to a desired size by dragging (e.g., resize the box to include only the desired object). According to one embodiment, a selection tool (e.g., a bounding box) may be provided as a rectangular box surrounding the object with a specific number of adjustment handles (e.g., four, eight, etc.), through which the object can be rotated, moved, cut, copied, and so on. For example, the electronic device may display eight adjustment handles (e.g., polygons such as rhombuses or circles) that the user can select (e.g., touch) to easily modify the selection tool drawn in response to dragging on the image, and the user can adjust the selection tool by moving an adjustment handle.
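For illustration only (this sketch is not from the patent; the class and method names are hypothetical), such a rectangular selection tool with eight adjustment handles could be modeled in Python as follows:

    from dataclasses import dataclass

    @dataclass
    class SelectionTool:
        """Rectangular bounding-box selection tool (hypothetical sketch)."""
        left: float
        top: float
        right: float
        bottom: float

        def handles(self):
            """Eight adjustment handles: four corners plus four edge midpoints."""
            cx = (self.left + self.right) / 2
            cy = (self.top + self.bottom) / 2
            return {
                "top-left": (self.left, self.top), "top": (cx, self.top),
                "top-right": (self.right, self.top), "right": (self.right, cy),
                "bottom-right": (self.right, self.bottom), "bottom": (cx, self.bottom),
                "bottom-left": (self.left, self.bottom), "left": (self.left, cy),
            }

        def move_handle(self, name, x, y):
            """Resize by dragging a handle; an edge handle moves only its own side."""
            if "left" in name: self.left = x
            if "right" in name: self.right = x
            if "top" in name: self.top = y
            if "bottom" in name: self.bottom = y

Dragging the "right" handle, for example, changes only the right side of the box, which matches the handle-based adjustment described above.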

A dragging method for drawing a selection tool in an electronic device amounts to pointing out a virtual rectangular box that the user has in mind. The user therefore must repeat trial and error to point out the virtual rectangle accurately. Further, while pointing for dragging, the input tool (e.g., the user's finger, an electronic pen, etc.) can hide the part of the screen being pointed at, making it difficult for the user to perform an accurate dragging operation. For example, accurately drawing a selection tool around the object the user wants requires repeated trial and error, and precisely selecting the desired object demands sophistication and considerable effort.

Various embodiments disclose a method and apparatus for automatically correcting a selection tool (e.g., a bounding box) so that a dragging operation can be performed more simply and easily when a user edits an image in an electronic device.

In various embodiments, a method and apparatus are provided for automatically correcting a selection tool (e.g., a bounding box) drawn by a user in an electronic device so that it becomes the nearest neighbor to the object.

In various embodiments, a method and apparatus are disclosed for automatically correcting, based on the object, a selection tool drawn for object selection in an image displayed through an electronic device.

In various embodiments, a method and apparatus are disclosed that, when editing an image in an electronic device, adjust (e.g., correct) the sides of the selection tool (e.g., each line segment forming a polygon) drawn according to the user's dragging based on the object, so that the desired object is selected.

Various embodiments disclose an object selection method and apparatus capable of selecting a desired object more precisely, even for a user's lazy dragging, by using the properties of objects around the selection tool in the image.

In various embodiments, a method and apparatus are disclosed that can reduce false positives and speed up computation when a user selects an object by dragging in an electronic device.

In various embodiments, a method and apparatus are disclosed for automatically correcting the drawn selection tool (e.g., the bounding box), based at least in part on edge detection and image segmentation techniques, even when the user lazily drags only near the desired object in the image.

An electronic device in accordance with various embodiments of the present invention includes a display, a memory, and a processor operatively coupled with the display and the memory, wherein the processor is configured to display an image through the display, draw a selection tool on the image in response to a user input, determine a correction range based on objects surrounding the selection tool upon completion of the user input, and correct and display the selection tool so that it becomes the nearest neighbor to the object based on the correction range.

A method of operating an electronic device according to various embodiments of the present invention includes the operations of: displaying an image through a display; drawing a selection tool on the image in response to a user input; determining, upon completion of the user input, a correction range based on objects surrounding the selection tool; and correcting and displaying the selection tool so that it becomes the nearest neighbor to the object based on the correction range.
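As a minimal, self-contained sketch of these operations (an illustrative reading, not the patent's algorithm: here the correction range is assumed to be a band around each side proportional to the box size, and each side snaps to the strongest edge response within its band):

    import numpy as np
    import cv2  # OpenCV is an assumed dependency

    def correct_selection(image, box, margin=0.15):
        """Snap a user-drawn box (left, top, right, bottom) toward nearby edges.

        The correction range is a band around each side, proportional to the
        box size; each side moves to the row/column with the strongest edge
        response inside its band, or stays put if no edges are found.
        """
        l, t, r, b = box
        edges = cv2.Canny(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 50, 150)
        dx = max(int((r - l) * margin), 1)
        dy = max(int((b - t) * margin), 1)

        def snap(coord, axis, band):
            lo = max(coord - band, 0)
            hi = min(coord + band, edges.shape[axis] - 1)
            strip = edges[lo:hi + 1, :] if axis == 0 else edges[:, lo:hi + 1]
            counts = strip.sum(axis=1 - axis)  # edge pixels per row/column
            return lo + int(np.argmax(counts)) if counts.max() > 0 else coord

        # axis 0 = rows (y: top/bottom sides), axis 1 = columns (x: left/right)
        return (snap(l, 1, dx), snap(t, 0, dy), snap(r, 1, dx), snap(b, 0, dy))

A call such as correct_selection(img, (120, 80, 300, 260)) would return the snapped (left, top, right, bottom) coordinates; the patent derives the actual correction range from the surrounding objects rather than from a fixed margin.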

In order to solve the above problems, various embodiments of the present invention may include a computer-readable recording medium recording a program for causing the processor to execute the method.

According to the electronic device and its operating method according to various embodiments, the electronic device can automatically correct the drawn selection tool (e.g., the bounding box) based at least in part on edge detection and image segmentation techniques. Thus, according to various embodiments, when a user selects a particular object in the displayed image, the selection tool can be automatically corrected to align around the object, even if the user only drags close to the desired object.
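A segmentation-based variant of the same correction can be sketched with OpenCV's GrabCut, using the user's rough box as the initialization and refitting the corrected box to the segmented foreground; the patent names edge detection and image segmentation only generally, so GrabCut here is a stand-in:

    import numpy as np
    import cv2  # OpenCV is an assumed dependency

    def correct_box_by_segmentation(image, box, iters=5):
        """Refit a rough user box to the segmented foreground object.

        The rough box initializes GrabCut; the corrected box is the bounding
        rectangle of the resulting foreground mask.
        """
        l, t, r, b = box
        mask = np.zeros(image.shape[:2], np.uint8)
        bgd = np.zeros((1, 65), np.float64)  # background model (GrabCut state)
        fgd = np.zeros((1, 65), np.float64)  # foreground model (GrabCut state)
        cv2.grabCut(image, mask, (l, t, r - l, b - t), bgd, fgd,
                    iters, cv2.GC_INIT_WITH_RECT)
        fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
        ys, xs = np.nonzero(fg)
        if xs.size == 0:
            return box  # nothing segmented: keep the user's box unchanged
        return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))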

According to various embodiments, an object desired by the user can be selected more precisely, even with lazy dragging, by using the characteristics of objects around the selection tool in the image, thereby improving user convenience. According to various embodiments, in the user's image editing operation, the dragging operation for object selection can be performed more intuitively and simply, further improving user convenience. According to various embodiments, when a user selects an object by dragging in an electronic device, false positives can be reduced and computation can be sped up.

An electronic device according to various embodiments can thus contribute to improved usability, convenience, or safety of the electronic device.

FIG. 1 is a diagram illustrating a network environment including an electronic device according to various embodiments of the present invention.
FIG. 2 is a block diagram of an electronic device in accordance with various embodiments of the present invention.
FIG. 3 is a block diagram of a program module in accordance with various embodiments of the present invention.
FIG. 4 is a diagram illustrating an example of a correction module associated with object selection in an electronic device according to various embodiments of the present invention.
FIG. 5 is a diagram illustrating an operation for reducing a search area in an electronic device according to various embodiments of the present invention.
FIG. 6 is a diagram illustrating an operation for learning patch components in an electronic device according to various embodiments of the present invention.
FIGS. 7A, 7B, 7C, and 7D are diagrams illustrating examples in which selection tool correction is applied in an electronic device according to various embodiments of the present invention.
FIG. 8 is a diagram illustrating an operation for correcting a selection tool in an electronic device according to various embodiments of the present invention.
FIG. 9 is a diagram illustrating an example of correcting and providing a selection tool in an electronic device according to various embodiments of the present invention.
FIG. 10 is a flow diagram illustrating a method of operation of an electronic device in accordance with various embodiments of the present invention.
FIG. 11 is a flow chart illustrating a method for determining a correction range of a selection tool in an electronic device according to various embodiments of the present invention.
FIG. 12 is a flow chart illustrating a method of correcting a selection tool in an electronic device in accordance with various embodiments of the present invention.
FIG. 13 is a flow chart illustrating a method for selecting an object by correction of a selection tool in an electronic device according to various embodiments of the present invention.

Hereinafter, various embodiments of the present document will be described with reference to the accompanying drawings. The embodiments and terminology used herein are not intended to limit the invention to the particular forms described, but to include various modifications, equivalents, and/or alternatives of the embodiments. In connection with the description of the drawings, like reference numerals may be used for similar components. Singular expressions may include plural expressions unless the context clearly dictates otherwise. In this document, expressions such as "A or B" or "at least one of A and/or B" may include all possible combinations of the items listed together. Expressions such as "first" or "second" may modify the corresponding components regardless of order or importance and are used only to distinguish one component from another, without limiting those components. When it is mentioned that some (e.g., first) component is "(functionally or communicatively) coupled" or "connected" to another (e.g., second) component, the first component may be connected directly to the second component or may be connected through another component (e.g., a third component).

In this document, the term "configured to" as used herein is intended to encompass all types of information, including, but not limited to, "hardware" or "software" Quot ;, " modified to ", "made to be "," capable of ", or "designed to" interchangeably. In some situations, the expression "a device configured to" may mean that the device can "do " with other devices or components. For example, a processor configured (or configured) to perform the phrases "A, B, and C" may be implemented by executing one or more software programs stored in a memory device or a dedicated processor (e.g., an embedded processor) And a generic-purpose processor (e.g., a central processing unit (CPU) or an application processor (AP)) capable of performing the corresponding operations.

Electronic devices in accordance with various embodiments of this document may include, for example, at least one of smartphones, tablet PCs, mobile phones, video phones, electronic book readers, desktop PCs, laptop PCs, netbook computers, workstations, servers, personal digital assistants (PDAs), portable multimedia players (PMPs), MP3 players, medical devices, cameras, or wearable devices. A wearable device may include at least one of an accessory type (e.g., a watch, a ring, a bracelet, an anklet, a necklace, a pair of glasses, a contact lens, or a head-mounted device (HMD)), a fabric- or garment-integrated type (e.g., electronic apparel), a body-attached type (e.g., a skin pad or tattoo), or a bio-implantable circuit. In some embodiments, the electronic device may include, for example, at least one of a television, a digital video disk (DVD) player, an audio system, a refrigerator, an air conditioner, a vacuum cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set-top box, a home automation control panel, a security control panel, a media box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™, PlayStation™), an electronic dictionary, an electronic key, a camcorder, or an electronic picture frame.

In other embodiments, the electronic device may include at least one of various medical devices (e.g., various portable medical measurement devices (such as a blood glucose meter, a heart rate meter, a blood pressure meter, or a thermometer), a magnetic resonance angiography (MRA) device, a magnetic resonance imaging (MRI) device, a computed tomography (CT) device, a scanner, or an ultrasound device), navigation devices, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), automotive infotainment devices, marine electronic equipment (e.g., marine navigation systems, gyrocompasses, etc.), avionics, security devices, vehicle head units, industrial or home robots, drones, automated teller machines (ATMs) of financial institutions, points of sale (POS) of shops, or Internet of Things devices (e.g., light bulbs, various sensors, sprinkler devices, fire alarms, thermostats, street lights, toasters, exercise equipment, hot water tanks, heaters, boilers, etc.). According to some embodiments, the electronic device may include at least one of a piece of furniture, a building/structure, part of an automobile, an electronic board, an electronic signature receiving device, a projector, or various measuring devices (e.g., water, electricity, gas, or radio wave measuring instruments). In various embodiments, the electronic device may be flexible, or may be a combination of two or more of the various devices described above. The electronic device according to the embodiments of this document is not limited to the above-described devices. In this document, the term "user" may refer to a person using an electronic device or a device using an electronic device (e.g., an artificial intelligence electronic device).

FIG. 1 is a diagram illustrating a network environment including an electronic device according to various embodiments.

Referring to FIG. 1, an electronic device 101 in a network environment 100 is described in various embodiments. The electronic device 101 may include a bus 110, a processor 120, a memory 130, an input/output interface 150, a display 160, and a communication interface 170. In some embodiments, the electronic device 101 may omit at least one of the components or may additionally include other components.

The bus 110 may include circuitry that connects the components 110 to 170 to one another and conveys communication (e.g., control messages or data) between the components.

Processor 120 may include one or more of a central processing unit (CPU), an application processor (AP), or a communication processor (CP). The processor 120 may perform computations or data processing related to, for example, control and / or communication of at least one other component of the electronic device 101. The processing (or control) operation of the processor 120 according to various embodiments is specifically described with reference to the following figures.

The memory 130 may include volatile and/or non-volatile memory. The memory 130 may store, for example, instructions or data related to at least one other component of the electronic device 101. According to one embodiment, the memory 130 may store software and/or a program 140. The program 140 may include, for example, a kernel 141, middleware 143, an application programming interface (API) 145, and/or application programs (or "applications") 147. At least a portion of the kernel 141, the middleware 143, or the API 145 may be referred to as an operating system (OS).

The memory 130 may store one or more programs executed by the processor 120 and may perform a function of temporarily storing input/output data. The input/output data may include, for example, various learning/training data related to image editing, and files such as moving pictures, images (e.g., photographs), or audio. According to various embodiments, the memory 130 is responsible for storing acquired data: data acquired in real time may be stored in a temporary storage device (e.g., a buffer), and data determined to be kept may be stored in a storage device. The memory 130 may include a computer-readable recording medium recording a program for causing the processor 120 to execute methods according to various embodiments.

The kernel 141 may control or manage system resources (e.g., the bus 110, the processor 120, or the memory 130) used to execute operations or functions implemented in other programs (e.g., the middleware 143, the API 145, or the application program 147). The kernel 141 may also provide an interface through which the middleware 143, the API 145, or the application program 147 can access individual components of the electronic device 101 to control or manage system resources.

The middleware 143 can act as an intermediary so that the API 145 or the application program 147 can communicate with the kernel 141 to exchange data. In addition, the middleware 143 may process one or more task requests received from the application program 147 according to priority. For example, the middleware 143 may give at least one of the application programs 147 priority to use the system resources (e.g., the bus 110, the processor 120, or the memory 130) of the electronic device 101, and may process the one or more task requests accordingly. The API 145 is an interface through which the application 147 controls functions provided by the kernel 141 or the middleware 143, and may include, for example, at least one interface or function (e.g., a command) for file control, window control, image processing, character control, or the like.

The input/output interface 150 may, for example, transfer commands or data input from a user or an external device to other component(s) of the electronic device 101, or output commands or data received from other component(s) of the electronic device 101 to the user or an external device. For example, a wired/wireless headphone port, an external charger port, a wired/wireless data port, a memory card port, an audio input/output port, a video input/output port, an earphone port, and the like may be included in the input/output interface 150.

The display 160 may include, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, an active matrix OLED (AMOLED) display, a microelectromechanical systems (MEMS) display, or an electronic paper display. The display 160 may display various content (e.g., text, images, video, icons, and/or symbols) to the user. The display 160 may include a touch screen and may receive, for example, touch, gesture, proximity, or hovering input using an electronic pen or a portion of the user's body.

The display 160 may, for example, show visual output to the user. The visual output may appear in the form of text, graphics, video, or combinations thereof. The display 160 can display (output) various information processed in the electronic device 101. For example, the display 160 may display a user interface (UI) or a graphical user interface (GUI) associated with the use of the electronic device. According to various embodiments, the display 160 can display various user interfaces (e.g., UIs or GUIs) associated with the operations performed by the electronic device 101 (e.g., automatic correction of a selection tool associated with object selection).

In various embodiments, the display 160 may include a flat display, or a curved display (or bended display) that can be curved, bent, or rolled without damage through a thin, flexible substrate, like paper. The curved display can be fastened to a housing (or bezel, or body) to maintain a curved shape. In various embodiments, the electronic device 101 may also be implemented with a display device that can be freely bent and unbent, such as a flexible display, as well as in the form of a curved display. In various embodiments, the display 160 may achieve flexibility by replacing the glass substrate enclosing the liquid crystal in an LCD, LED, OLED, or AMOLED display with a plastic film. The display 160 may extend to at least one side (e.g., at least one of the left, right, top, or bottom sides) of the electronic device 101, such that the curved display may be fastened to the side while having an operable radius of curvature (e.g., 5 cm, 1 cm, 7.5 mm, 5 mm, 4 mm, etc.).

The communication interface 170 can establish communication between the electronic device 101 and an external device (e.g., the first external electronic device 102, the second external electronic device 104, or the server 106). For example, the communication interface 170 may be connected to the network 162 via wireless or wired communication to communicate with an external device (e.g., the second external electronic device 104 or the server 106).

Wireless communication may include, for example, cellular communication using at least one of long term evolution (LTE), LTE Advanced (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), or global system for mobile communications (GSM). According to one embodiment, wireless communication may include, for example, at least one of wireless fidelity (WiFi), wireless gigabit alliance (WiGig), Bluetooth, Bluetooth low energy (BLE), Zigbee, near field communication (NFC), magnetic secure transmission (MST), radio frequency (RF), or body area network (BAN). According to one embodiment, wireless communication may include a GNSS. The GNSS may be, for example, a global positioning system (GPS), a global navigation satellite system (Glonass), a Beidou navigation satellite system (Beidou), or Galileo, the European global satellite-based navigation system. Hereinafter, in this document, "GPS" may be used interchangeably with "GNSS".

Wired communication may include, for example, at least one of a universal serial bus (USB), a high definition multimedia interface (HDMI), recommended standard 232 (RS-232), power line communication, or plain old telephone service (POTS).

The network 162 may include at least one of telecommunications networks, for example, a computer network (e.g., a local area network (LAN) or a wide area network (WAN)), the Internet, or a telephone network.

Each of the first external electronic device 102 and the second external electronic device 104 may be a device of the same or a different kind as the electronic device 101. According to various embodiments, all or a portion of the operations performed in the electronic device 101 may be performed in one or more other electronic devices (e.g., the electronic device 102 or 104, or the server 106). According to one embodiment, when the electronic device 101 has to perform a function or service automatically or on request, the electronic device 101 may, instead of or in addition to executing the function or service itself, request at least some of the associated functionality from other devices (e.g., the electronic device 102 or 104, or the server 106). The other electronic device (e.g., the electronic device 102 or 104, or the server 106) may execute the requested function or additional function and deliver the result to the electronic device 101. The electronic device 101 can provide the requested function or service by processing the received result as-is or additionally. For this purpose, for example, cloud computing, distributed computing, or client-server computing technology may be used.

The server 106 may include, for example, at least one of an integration server, a provider server (or a communication provider server), a content server, an Internet server, a cloud server, a web server, a secure server, or a certification server.

FIG. 2 is a block diagram of an electronic device according to various embodiments.

The electronic device 201 may include, for example, all or part of the electronic device 101 shown in FIG. 1. The electronic device 201 may include one or more processors (e.g., AP) 210, a communication module 220, a subscriber identification module 224, a memory 230, a sensor module 240, an input device 250, a display 260, an interface 270, an audio module 280, a camera module 291, a power management module 295, a battery 296, an indicator 297, and a motor 298. In various embodiments, since the components shown in FIG. 2 are not essential, the electronic device 201 may be implemented with more or fewer components than shown in FIG. 2. For example, the electronic device 201 according to various embodiments may not include some components depending on its type. According to various embodiments, the above-described components of the electronic device 201 may be seated in the housing (or bezel, or body) of the electronic device 201, or formed on its outside.

The processor 210 may, for example, drive an operating system or application program to control a number of hardware or software components coupled to the processor 210, and may perform various data processing and operations. The processor 210 may be implemented with, for example, a system on chip (SoC). According to one embodiment, the processor 210 may further include a graphics processing unit (GPU) and / or an image signal processor (ISP). Processor 210 may include at least some of the components shown in FIG. 2 (e.g., cellular module 221). Processor 210 may load and process instructions or data received from at least one of the other components (e.g., non-volatile memory) into volatile memory and store the resulting data in non-volatile memory.

In various embodiments, the processor 210 may control the overall operation of the electronic device 201. In various embodiments, the processor 210 may include one or more processors. For example, the processor 210 may include a communication processor (CP), an application processor (AP), an interface (e.g., general purpose input/output (GPIO)), or internal memory, either as separate components or integrated into one or more integrated circuits. According to one embodiment, the application processor may execute various software programs to perform various functions for the electronic device 201, and the communication processor may perform processing and control for voice communication and data communication. The processor 210 may execute specific software modules (e.g., instruction sets) stored in the memory 230 to perform various specific functions corresponding to those modules.

In various embodiments, the processor 210 may control the operation of hardware modules such as the audio module 280, the interface 270, the display 260, the camera module 291, the communication module 220, and the power management module 295. According to various embodiments, the processor 210 may be electrically coupled to the display 260 and the memory 230 of the electronic device 201.

According to various embodiments, the processor 210 may automatically correct a selection tool of a set type (e.g., a bounding box such as a rectangular box, a freeform box, or an elliptical box) to enable more accurate object selection. According to one embodiment, when the user drags near a desired object in an image, the processor 210 may automatically correct the selection tool (e.g., the bounding box) drawn in the electronic device 201, based on at least a part of edge detection and image segmentation techniques. According to one embodiment, when the user selects an object by dragging in the electronic device 201, the processor 210 may use the properties of objects around the drawn selection tool so that the object desired by the user is selected more precisely, even for the user's lazy dragging.

According to various embodiments, the processor 210 may process operations of: displaying an image through the display 260; drawing a selection tool on the image in response to a user input; determining, upon completion of the user input, a correction range based on objects surrounding the selection tool; and correcting and displaying the selection tool so that it becomes the nearest neighbor to the object based on the correction range.

According to various embodiments, when determining the correction range, the processor 210 may control an operation of separating the image into superpixel units in response to sensing the user input for the selection tool. According to various embodiments, the processor 210 may learn edge information (e.g., line features, patch features) around the drawn selection tool based on a random forest, and determine the correction range based on the learning.
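As a rough sketch of this step (assuming scikit-image for SLIC superpixels and scikit-learn for the random forest; the mean-color feature is a simplified stand-in for the line/patch features the patent describes):

    import numpy as np
    from skimage.segmentation import slic  # SLIC superpixels
    from sklearn.ensemble import RandomForestClassifier

    def superpixel_features(image, n_segments=200):
        """Split the image into superpixels and compute one feature per segment.

        Mean color is used here as a trivial stand-in for the patent's
        line/patch features around the drawn selection tool.
        """
        labels = slic(image, n_segments=n_segments, start_label=0)
        feats = np.stack([image[labels == i].mean(axis=0)
                          for i in range(labels.max() + 1)])
        return labels, feats

    # A random forest can then score which superpixels near the drawn tool
    # belong to the object boundary; training data is assumed to exist.
    forest = RandomForestClassifier(n_estimators=100)
    # forest.fit(train_feats, train_labels)        # hypothetical training set
    # scores = forest.predict_proba(feats)[:, 1]   # boundary likelihood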

The processing (or control) operation of the processor 210 in accordance with various embodiments is specifically described with reference to the following figures.

The communication module 220 may have, for example, the same or a similar configuration as the communication interface 170 shown in FIG. 1. The communication module 220 may include, for example, a cellular module 221, a WiFi module 223, a Bluetooth module 225, a GNSS module 227, an NFC module 228, and an RF module 229. Although not shown, the communication module 220 may further include, for example, a WiGig module. According to one embodiment, the WiFi module 223 and the WiGig module may be integrated into one chip.

The cellular module 221 may provide, for example, voice calls, video calls, text services, or Internet services over a communication network. According to one embodiment, the cellular module 221 may perform identification and authentication of the electronic device 201 within the communication network using the subscriber identification module (e.g., SIM card) 224. According to one embodiment, the cellular module 221 may perform at least some of the functions that the processor 210 can provide. According to one embodiment, the cellular module 221 may include a communication processor (CP). According to some embodiments, at least some (e.g., two or more) of the cellular module 221, the WiFi module 223, the Bluetooth module 225, the GNSS module 227, or the NFC module 228 may be included in a single integrated chip (IC) or IC package.

The RF module 229 can, for example, transmit and receive communication signals (e.g., RF signals). The RF module 229 may include, for example, a transceiver, a power amplifier module (PAM), a frequency filter, a low noise amplifier (LNA), or an antenna. According to another embodiment, at least one of the cellular module 221, the WiFi module 223, the Bluetooth module 225, the GNSS module 227, or the NFC module 228 may transmit and receive RF signals through a separate RF module.

The WiFi module 223 may represent a module for forming a wireless LAN link with, for example, the wireless Internet or an external device (e.g., another electronic device 102 or the server 106). WiFi, WiGig, WiBro, worldwide interoperability for microwave access (WiMAX), high speed downlink packet access (HSDPA), or millimeter wave (mmWave) may be used as the wireless Internet technology. The WiFi module 223 may be embedded in or external to the electronic device 201. The WiFi module 223, in conjunction with an external device (e.g., another electronic device 104, etc.) connected to the electronic device 201 directly or via a network, can transmit various data of the electronic device 201 to the outside or receive data from the outside. The WiFi module 223 may be kept on at all times, or may be turned on/off according to the settings of the electronic device or user input.

The Bluetooth module 225 and the NFC module 228 may represent, for example, short-range communication modules for performing short-range communication. Bluetooth, Bluetooth low energy (BLE), radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), Zigbee, or NFC may be used as the short-range communication technology. The short-range communication module, in conjunction with an external device (e.g., another electronic device 102, etc.) connected to the electronic device 201 via a network (e.g., a short-range communication network), can transmit various data of the electronic device 201 to the external device or receive data from it. The short-range communication modules (e.g., the Bluetooth module 225 and the NFC module 228) may be kept on at all times, or may be turned on/off according to the settings of the electronic device 201 or user input.

The subscriber identification module 224 may include, for example, a card containing a subscriber identification module or an embedded SIM, and may contain unique identification information (e.g., an integrated circuit card identifier (ICCID)) or subscriber information (e.g., an international mobile subscriber identity (IMSI)).

The memory 230 (e.g., the memory 130) may include, for example, an internal memory 232 or an external memory 234. The internal memory 232 may include, for example, at least one of volatile memory (e.g., dynamic RAM (DRAM), static RAM (SRAM), or synchronous dynamic RAM (SDRAM)) and non-volatile memory (e.g., one time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, flash ROM, flash memory, a hard drive, or a solid state drive (SSD)). The external memory 234 may include a flash drive, for example, compact flash (CF), secure digital (SD), micro-SD, mini-SD, extreme digital (xD), a multi-media card (MMC), or a memory stick. The external memory 234 may be functionally or physically connected to the electronic device 201 via various interfaces.

The memory 230 may store instructions or data that, when executed by the processor 210, cause the processor 210 to display an image through the display 260, draw a selection tool on the image in response to a user input, determine, upon completion of the user input, a correction range based on objects surrounding the selection tool, and correct and display the selection tool so that it becomes the nearest neighbor to the object based on the correction range.

In accordance with various embodiments, the memory 230 may store data or instructions related to causing the processor 210 to partition the image into superpixel units in response to sensing the user input for the selection tool, and to learn edge information (e.g., line components, patch components) around the drawn selection tool based on a random forest to determine the correction range.

The memory 230 may include an extended memory (e.g., external memory 234) or an internal memory (e.g., internal memory 232). The electronic device 201 may operate in association with web storage that performs the storage function of the memory 230 over the Internet.

The memory 230 may store one or more pieces of software (or software modules). For example, the software components may include an operating system software module, a communication software module, a graphics software module, a user interface software module, a moving picture experts group (MPEG) module, a camera software module, and one or more application software modules. A module, being a software component, can be expressed as a set of instructions, so a module is sometimes referred to as an instruction set. A module may also be expressed as a program. In various embodiments of the present invention, the memory 230 may include additional modules (instructions) in addition to the modules described above, or some modules (instructions) may not be used as needed.

Operating system software modules may include various software components that control general system operations. Control of this general system operation may mean, for example, memory management and control, storage hardware (device) control and management, or power control and management. In addition, an operating system software module can also facilitate the communication between various hardware (devices) and software components (modules).

The communication software module may enable communication with other electronic devices, such as a wearable device, a smart phone, a computer, a server, or a handheld terminal, via the communication module 220 or interface 270. The communication software module may be configured with a protocol structure corresponding to the communication method.

The graphics software module may include various software components for providing and displaying graphics on the display 260. In various embodiments, the term graphics may be used in a sense that includes text, web pages, icons, digital images, video, animation, and the like.

The user interface software module may include various software components related to the user interface (UI), for example, content about how the state of the user interface changes or under which conditions the user interface state changes.

An MPEG module may include software components that enable processes and functions related to digital content (e.g., video, audio), such as creation, playback, distribution, and transmission of content.

The camera software module may include camera-related software components that enable camera-related processes and functions.

The application modules may include a web browser including a rendering engine, email, instant messaging, word processing, keyboard emulation, an address book, a touch list, widgets, digital rights management (DRM), iris scanning, context awareness, voice recognition, a positioning function, a location determining function, location-based services, and the like. According to various embodiments of the present invention, the application modules may include a correction module for automatic correction of the selection tool. For example, a search area processing module, a feature point processing module, and the like may be included in the correction module.

The sensor module 240 may, for example, measure a physical quantity or sense an operating state of the electronic device 201 and convert the measured or sensed information into an electrical signal. The sensor module 240 may include, for example, at least one of a gesture sensor 240A, a gyro sensor 240B, a barometer sensor 240C, a magnetic sensor 240D, an acceleration sensor 240E, a grip sensor 240F, a proximity sensor 240G, a color sensor 240H (e.g., an RGB (red, green, blue) sensor), a temperature/humidity sensor 240J, an illuminance sensor 240K, or an ultraviolet (UV) sensor 240M. Additionally or alternatively, the sensor module 240 may include, for example, an e-nose sensor, an electromyography (EMG) sensor, an electroencephalogram (EEG) sensor, an electrocardiogram (ECG) sensor, an infrared (IR) sensor, an iris scan sensor, and/or a fingerprint scan sensor. The sensor module 240 may further include a control circuit for controlling at least one or more of the sensors belonging to it. In some embodiments, the electronic device 201 may further include a processor configured to control the sensor module 240, either as part of the processor 210 or separately, so that the sensor module 240 can be controlled while the processor 210 is in a sleep state.

The input device 250 may include, for example, a touch panel 252, a (digital) pen sensor 254, a key 256, or an ultrasonic input device 258. The touch panel 252 may use, for example, at least one of capacitive, resistive, infrared, or ultrasonic methods, and may further include a control circuit. The touch panel 252 may further include a tactile layer to provide a tactile response to the user. The (digital) pen sensor 254 may, for example, be part of the touch panel or include a separate recognition sheet. The key 256 may include, for example, a physical button, an optical key, or a keypad. The ultrasonic input device 258 can sense ultrasonic waves generated from an input tool through the microphone 288 and identify data corresponding to the sensed ultrasonic waves. According to various embodiments, the input device 250 may include an electronic pen. According to various embodiments, the input device 250 may be implemented to receive a force touch.

Display 260 (e.g., display 160) may include panel 262, hologram device 264, projector 266, and / or control circuitry for controlling them.

The panel 262 may be implemented to be, for example, flexible, transparent, or wearable. The panel 262 may be composed of the touch panel 252 and one or more modules. According to one embodiment, the panel 262 may include a pressure sensor (or force sensor) capable of measuring the intensity of pressure on the user's touch. The pressure sensor may be implemented integrally with the touch panel 252, or as one or more sensors separate from the touch panel 252. The panel 262 may be seated in the display 260 and may sense user input contacting or approaching the surface of the display 260. The user input may include a touch input or a proximity input based on at least one of a single touch, a multi-touch, hovering, or an air gesture. In various embodiments, the panel 262 may receive user input initiating an operation associated with the use of the electronic device 201 and may generate an input signal according to the user input. The panel 262 may be configured to convert a change in pressure applied to a specific area of the display 260, or in capacitance occurring at a specific area of the display 260, into an electrical input signal. The panel 262 can detect the position and area at which an input tool (e.g., a user's finger, an electronic pen, etc.) touches or approaches the surface of the display 260. The panel 262 can also be configured to detect pressure at the time of touch (e.g., a force touch), depending on the touch method applied.

The hologram device 264 can display a stereoscopic image in the air using interference of light. The projector 266 can display an image by projecting light onto a screen. The screen may be located, for example, inside or outside the electronic device 201.

The interface 270 may include, for example, an HDMI 272, a USB 274, an optical interface 276, or a D-subminiature (D-sub) 278. The interface 270 may, for example, be included in the communication interface 170 shown in FIG. 1. Additionally or alternatively, the interface 270 may include, for example, a mobile high-definition link (MHL) interface, an SD card/multi-media card (MMC) interface, or an infrared data association (IrDA) interface.

The interface 270 may receive data from another electronic device, or may receive power and deliver it to the respective components within the electronic device 201. The interface 270 may allow data within the electronic device 201 to be transmitted to another electronic device. For example, a wired/wireless headphone port, an external charger port, a wired/wireless data port, a memory card port, an audio input/output port, a video input/output port, an earphone port, and the like may be included in the interface 270.

The audio module 280 can, for example, convert sound and electrical signals bidirectionally. At least some components of the audio module 280 may be included, for example, in the input/output interface 150 shown in FIG. 1. The audio module 280 may process sound information input or output through, for example, a speaker 282, a receiver 284, an earphone 286, or a microphone 288. The audio module 280 transmits audio signals received from the processor 210 to an output device (e.g., the speaker 282, the receiver 284, or the earphone 286), and transfers audio signals, such as voice, received from an input device (e.g., the microphone 288) to the processor 210. Under the control of the processor 210, the audio module 280 converts audio/sound data into audible sound through an output device, and converts audio signals, such as voice, received from an input device into digital signals to transfer them to the processor 210.

The speaker 282 or the receiver 284 may output audio data received from the communication module 220 or stored in the memory 230. The speaker 282 or the receiver 284 may output acoustic signals related to various operations (functions) performed in the electronic device 201. The microphone 288 can receive an external acoustic signal and process it into electrical voice data. Various noise reduction algorithms may be implemented in the microphone 288 to remove noise generated while receiving an external acoustic signal. The microphone 288 may be responsible for the input of audio streaming, such as voice commands.

The camera module 291 is, for example, a device capable of capturing still images and moving images. According to one embodiment, it may include one or more image sensors (e.g., a front sensor or a rear sensor), a lens, an image signal processor (ISP), or a flash (e.g., an LED or xenon lamp).

According to various embodiments, the camera module 291 represents a configuration that supports an imaging function of the electronic device 201. The camera module 291 may photograph an arbitrary subject under the control of the processor 210 and deliver the captured data (e.g., an image) to the display 260 and the processor 210. According to various embodiments, the camera module 291 may include, for example, a first camera (e.g., a color camera) for acquiring color information and a second camera (e.g., an infrared (IR) camera) for acquiring depth information. According to one embodiment, the first camera may be a front camera provided on the front surface of the electronic device 201. According to various embodiments, the front camera may be replaced by the second camera, and the first camera may not be provided on the front of the electronic device 201. According to various embodiments, the first camera may be disposed together with the second camera on the front of the electronic device 201. According to one embodiment, the first camera may be a rear camera provided on the rear surface of the electronic device 201. According to one embodiment, the first camera may be configured to include both a front camera and a rear camera, provided on the front and rear surfaces of the electronic device 201, respectively.

The camera module 291 may include an image sensor. The image sensor may be implemented as a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) sensor.

The power management module 295 can, for example, manage the power of the electronic device 201. According to one embodiment, the power management module 295 may include a power management integrated circuit (PMIC), a charger IC, or a battery or fuel gauge. The PMIC may have a wired and/or wireless charging scheme. The wireless charging scheme may include, for example, a magnetic resonance scheme, a magnetic induction scheme, or an electromagnetic wave scheme, and may further include additional circuitry for wireless charging, for example, a coil loop, a resonant circuit, or a rectifier. The battery gauge can measure, for example, the remaining charge of the battery 296, or its voltage, current, or temperature during charging. The battery 296 may include, for example, a rechargeable battery and/or a solar battery.

The indicator 297 may indicate a particular state of the electronic device 201 or a part thereof (e.g., the processor 210), for example, a booting state, a message state, or a charging state. The motor 298 can convert an electrical signal into mechanical vibration, and can generate vibration, haptic effects, and the like. The electronic device 201 may include, for example, a mobile TV support device (e.g., a GPU) capable of processing media data in accordance with standards such as digital multimedia broadcasting (DMB), digital video broadcasting (DVB), or MediaFLO™.

Each of the components described in this document may be composed of one or more parts, and the name of a component may vary depending on the type of the electronic device. In various embodiments, an electronic device (e.g., the electronic device 101 or 201) may omit some components, include additional components, or combine some of the components into a single entity that performs the functions of the corresponding components before the combination in the same manner.

FIG. 3 is a block diagram of a program module in accordance with various embodiments.

According to one embodiment, the program module 310 (e.g., the program 140) may include an operating system that controls resources associated with an electronic device (e.g., the electronic device 101 or 201) and/or various applications (e.g., the application program 147) running on the operating system. The operating system may include, for example, Android™, iOS™, Windows™, Symbian™, Tizen™, or Bada™.

Referring to FIG. 3, the program module 310 may include a kernel 320 (e.g., the kernel 141), middleware 330 (e.g., the middleware 143), an API 360 (e.g., the API 145), and/or applications 370 (e.g., the application program 147). At least a portion of the program module 310 may be preloaded on the electronic device or may be downloaded from an external electronic device (e.g., the electronic device 102 or 104, or the server 106).

The kernel 320 may include, for example, a system resource manager 321 and/or a device driver 323. The system resource manager 321 can perform control, allocation, or recovery of system resources. According to one embodiment, the system resource manager 321 may include a process manager, a memory manager, or a file system manager. The device driver 323 may include, for example, a display driver, a camera driver, a Bluetooth driver, a shared memory driver, a USB driver, a keypad driver, a WiFi driver, an audio driver, or an inter-process communication (IPC) driver. The middleware 330 may, for example, provide functions commonly needed by the applications 370, or provide various functions to the applications 370 through the API 360 so that the applications 370 can use the limited system resources within the electronic device.

According to one embodiment, the middleware 330 may include at least one of a runtime library 335, an application manager 341, a window manager 342, a multimedia manager 343, a resource manager 344, a power manager 345, a database manager 346, a package manager 347, a connectivity manager 348, a notification manager 349, a location manager 350, a graphic manager 351, or a security manager 352.

The runtime library 335 may include, for example, a library module that the compiler uses to add new functionality via a programming language while the application 370 is executing. The runtime library 335 may perform input / output management, memory management, or arithmetic function processing.

The application manager 341 can manage the life cycle of the application 370, for example. The window manager 342 can manage graphical user interface (GUI) resources used in the screen. The multimedia manager 343 can recognize the format required for reproducing the media files and can perform encoding or decoding of the media file using a codec according to the format. The resource manager 344 can manage the source code of the application 370 or the space of the memory. The power manager 345 may, for example, manage the capacity or power of the battery and provide the power information necessary for operation of the electronic device. According to one embodiment, the power manager 345 may interface with a basic input / output system (BIOS). The database manager 346 may create, retrieve, or modify the database to be used in the application 370, for example. The package manager 347 can manage installation or update of an application distributed in the form of a package file.

The connectivity manager 348 may, for example, manage the wireless connection. The notification manager 349 may provide the user with an event such as, for example, an arrival message, an appointment, a proximity notification, and the like. The location manager 350 can manage the location information of the electronic device, for example. The graphic manager 351 can manage, for example, a graphic effect to be provided to the user or a user interface related thereto. The security manager 352 may provide, for example, system security or user authentication.

According to one embodiment, the middleware 330 may include a telephony manager for managing the voice or video call function of the electronic device, or a middleware module capable of forming a combination of the functions of the above-described components. According to one embodiment, the middleware 330 may provide a module specialized for each type of operating system. Middleware 330 may dynamically delete some existing components or add new ones.

The API 360 is, for example, a set of API programming functions, and may be provided in a different configuration depending on the operating system. For example, in the case of Android or iOS, one API set may be provided for each platform, and in the case of Tizen, two or more API sets may be provided for each platform.

The application 370 may include, for example, a home 371, a dialer 372, an SMS/MMS 373, an instant message (IM) 374, a browser 375, a camera 376, an alarm 377, a contact 378, a voice dial 379, an e-mail 380, a calendar 381, a media player 382, an album 383, a watch 384, and the like. According to various embodiments, the application 370 may include an application that provides health care (e.g., measuring the amount of exercise or blood glucose) or environmental information (e.g., pressure, humidity, or temperature information).

According to one embodiment, the application 370 may include an information exchange application that can support the exchange of information between the electronic device 201 and an external electronic device. The information exchange application may include, for example, a notification relay application for communicating specific information to an external electronic device, or a device management application for managing an external electronic device. For example, the notification relay application may transmit notification information generated in another application of the electronic device to the external electronic device, or may receive notification information from the external electronic device and provide it to the user. The device management application may, for example, turn on/turn off a function of an external electronic device in communication with the electronic device (e.g., turning on/off the external electronic device itself (or some components thereof) or adjusting the brightness (or resolution) of its display), or may install, delete, or update an application running on the external electronic device. According to one embodiment, the application 370 may include an application (e.g., a health care application of a mobile medical device) designated according to the attributes of the external electronic device.

According to one embodiment, the application 370 may include an application received from an external electronic device. At least some of the program module 310 may be implemented (e.g., executed) in software, firmware, hardware (e.g., processor 210), or a combination of at least two thereof, and may include, for example, a module, a program, a routine, an instruction set, or a process for performing one or more functions.

As used herein, the term "module" includes a unit composed of hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit. A "module" may be an integrally constructed component or a minimum unit or part thereof that performs one or more functions. A "module" may be implemented mechanically or electronically, and may include, for example, an application-specific integrated circuit (ASIC) chip, field-programmable gate arrays (FPGAs), or programmable logic devices that perform certain operations, known or to be developed. At least some of the devices (e.g., modules or functions thereof) or methods (e.g., operations) according to various embodiments may be implemented as instructions stored in a computer-readable storage medium (e.g., the memory 130 of FIG. 1 or the memory 230 of FIG. 2) in the form of a program module. When an instruction is executed by a processor (e.g., the processor 120 of FIG. 1 or the processor 210 of FIG. 2), the processor may perform a function corresponding to the instruction.

The computer-readable recording medium may include a hard disk, a floppy disk, magnetic media (e.g., magnetic tape), optical recording media (e.g., compact disc read-only memory (CD-ROM), digital versatile disc (DVD)), magneto-optical media (e.g., a floptical disk), internal memory, and the like. The instructions may include code generated by a compiler or code executable by an interpreter. The module or program module according to various embodiments may include at least one of the above-described components, may omit some of them, or may further include other components. Operations performed by a module, program module, or other component according to various embodiments may be performed sequentially, in parallel, repetitively, or heuristically, or at least some of the operations may be executed in a different order or omitted, or other operations may be added.

According to various embodiments, the recording medium may include a computer-readable recording medium having recorded thereon a program for causing the processor 120, 210 to execute the various methods described below.

In various embodiments, the electronic device may include any device that uses one or more of a variety of processors, such as an AP, a CP, a GPU, and a CPU. For example, an electronic device according to various embodiments may include an information communication device, a multimedia device, a wearable device, an IoT device, or various other devices corresponding to those devices.

FIG. 4 is a diagram illustrating an example of a correction module associated with object selection in an electronic device according to various embodiments of the present invention.

As shown in FIG. 4, FIG. 4 illustrates an example of a correction module 400 that automatically corrects, based on an object, a selection tool (e.g., a bounding box) drawn according to a user's dragging on an image displayed in an electronic device (e.g., the electronic device 101, 201 of FIG. 1 or FIG. 2). In various embodiments, the correction module 400 may be included as a hardware module in a processor (e.g., the processors 120 and 210 of FIG. 1 or FIG. 2, hereinafter referred to as the processor 210), or may be included as a software module.

Referring to FIG. 4, the correction module 400 for automatic correction of the selection tool may include a search region processing module 410, a feature point processing module 420, a magnetic processing module 450, and the like. According to various embodiments, the feature point processing module 420 may include a first feature point processing module 430 and a second feature point processing module 440.

The search area processing module 410 may process an operation of reducing the search area in the image displayed through the display (e.g., the displays 160 and 260 of FIG. 1 or FIG. 2) when the selection tool is drawn by the user. For example, the search area processing module 410 may reduce the unnecessary search area before determining the actual object belonging to the selection tool in the image, thereby reducing false positives in the random forest inference to be described later. An example of this is shown in FIG. 5.

FIG. 5 is a diagram illustrating an operation for reducing a search area in an electronic device according to various embodiments of the present invention.

According to various embodiments, when an initial selection tool 520 (e.g., a bounding box) is drawn by a user in an image that includes a particular object 510, the size of the newly corrected selection tool may be determined in proportion to the size of the initial selection tool 520. For example, FIG. 5 may illustrate an example in which a user selects a particular object 510 (e.g., an airplane) from an image.

According to various embodiments, the image 500 may be rotated 270 degrees clockwise (or 90 degrees counterclockwise) to reduce the search area for the random forest inference. For example, as shown in the example of FIG. 5(a), when the user has drawn the selection tool 520 close to the object 510, the search area processing module 410 may set the search range (or area) R through the 270-degree rotation. For example, based on the right border of the object 510 (e.g., an airplane) selected in the image 500, the search range may be determined to be half the width of the selection tool 520 (e.g., bounding box).

As shown in the example of FIG. 5(b), the search area processing module 410 may remove compression artifacts in the image 500. According to one embodiment, the search area processing module 410 may apply an edge-preserving smoothing filter to remove the compression artifacts. According to one embodiment, the search area processing module 410 may apply Fast Global Smoothing filtering to the input R to remove the compression artifacts in the image 500 and generate edge-preserving superpixels. In general, a single pixel represents a point with no meaning on its own, whereas a set of points with the same information can be defined as a superpixel. For example, each set of points with the same property (e.g., color) can be viewed as a single pixel, and may be defined as a cluster.
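As a minimal sketch (not part of the original disclosure), the edge-preserving smoothing step might look as follows in Python, assuming the opencv-contrib ximgproc module; the file name and filter parameter values are illustrative assumptions:

    import cv2

    # Hypothetical input: the search region R cropped from the displayed image.
    region = cv2.imread("search_region.png")

    # Fast Global Smoothing: smooths flat areas (compression artifacts) while
    # preserving the strong object edges found in the guide image.
    smoothed = cv2.ximgproc.fastGlobalSmootherFilter(
        region,   # guide image whose edges are preserved
        region,   # source image to be smoothed
        100.0,    # lambda: overall smoothing strength (assumed value)
        5.0,      # sigma_color: edge sensitivity in color space (assumed value)
    )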

As shown in the example of FIG. 5(c), the search area processing module 410 may generate superpixels and then apply over-segmentation (e.g., irregular superpixel segmentation) on a superpixel basis. According to one embodiment, the search area processing module 410 may over-segment the input R and generate irregular superpixels using graph-based segmentation in the image 500. According to one embodiment, the over-segmentation may include an image processing operation that combines a pixel with surrounding pixels of similar color to produce a single large pixel (e.g., a superpixel). According to one embodiment, in the example of FIG. 5(c), the top surface of the image 500 (e.g., the topmost region 530) can be assumed to be the background of the object 510 in the selection tool 520. Accordingly, any superpixel connected to the corresponding portion (e.g., region 530) can be assumed to be background. Thus, as shown in the example of FIG. 5(d), it is possible to reduce the segments (e.g., regions) to be considered for the search.

As shown in the example of FIG. 5(d), the search area processing module 410 may consider all superpixels connected to the upper surface (e.g., region 530) of the image 500 as background superpixels, and may generate the remaining superpixels (e.g., the two foreground regions 540 and 550) as candidates for the target object. Thus, in subsequent operations, it can be determined whether the peak (e.g., the topmost pixel) of the remaining superpixels (e.g., the foreground regions 540 and 550) is the boundary of the object 510, and the correction range of the selection tool can be determined.
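A minimal sketch of this background pruning, assuming Felzenszwalb's method from scikit-image stands in for the unspecified graph-based segmentation; the file name and parameters are illustrative, and superpixels directly touching the top row are used as a simplification of "connected to the upper surface":

    import numpy as np
    from skimage import io, segmentation

    image = io.imread("image_500.png")

    # Graph-based over-segmentation into irregular superpixels.
    labels = segmentation.felzenszwalb(image, scale=100, sigma=0.5, min_size=50)

    # Superpixels touching the topmost row (region 530) are treated as
    # background; the remaining superpixels become target-object candidates.
    background_ids = np.unique(labels[0, :])
    candidate_ids = np.setdiff1d(np.unique(labels), background_ids)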

Referring again to FIG. 4, the feature point processing module 420 may process the operation of correcting the selection tool. For example, the feature point processing module 420 may process operations to refine and correct the selection tool 520 based on the remaining superpixels 540 and 550 in the example of FIG. 5(d). According to one embodiment, the feature point processing module 420 may use a random forest model, obtained by learning (training) the boundary edges of a number of objects in the Pascal visual object classes (VOC) dataset, to determine whether a remaining superpixel is the boundary of the object. In various embodiments, in the feature point processing module 420, the learning (or training) of the object characteristics may include first feature point learning (e.g., line feature learning) and second feature point learning (e.g., patch feature learning). According to various embodiments, the feature point processing module 420 may include a first feature point processing module 430 for extracting (processing) first feature points from an image (e.g., the remaining superpixels) based on the first feature point learning, and a second feature point processing module 440 for extracting (processing) second feature points from the image (e.g., the remaining superpixels) based on the second feature point learning.

A random forest (or random forest model), as described in various embodiments, is one of the analysis techniques (or models) and may be, for example, an extension of the decision tree concept. For example, a decision tree is a method of generating and learning one set of training data from a given dataset to build a single tree and predict the target variable. A random forest, on the other hand, generates multiple sets of training data through random sampling with replacement from the same dataset, builds multiple trees through multiple rounds of learning, and combines them to finally predict the target variable. According to one embodiment, a random forest extracts N sets of training data by random sampling with replacement from the data prepared for analysis, builds a tree from each of them, and derives a final prediction result (or predicted value) by combining the trees' outputs through a vote, an average, or a probability.
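As a hedged illustration of this idea, not code from the disclosure, a random forest regressor can be built with scikit-learn; the feature vectors and scores below are placeholders:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_train = rng.random((500, 16))   # hypothetical edge-feature vectors
    y_train = rng.random(500)         # hypothetical boundary scores

    # Each tree is fit on a bootstrap (with-replacement) sample of the same
    # dataset; the forest combines the trees by averaging their predictions.
    forest = RandomForestRegressor(n_estimators=50, bootstrap=True)
    forest.fit(X_train, y_train)

    print(forest.predict(rng.random((1, 16))))  # combined (averaged) prediction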

The first feature point processing module 430 may detect the overall line components in the image 500 (e.g., the superpixels 540 and 550 remaining in the example of FIG. 5(d)).

According to one embodiment, the first feature point processing module 430 may generate (calculate) an edge map from the image 500 shown in the example of FIG. 5(d). According to one embodiment, the first feature point processing module 430 may generate the edge map using various edge detection schemes (e.g., structured edges techniques). The first feature point processing module 430 may resize (or scale) the edge map so that its width has a specific number of pixels (e.g., 64 pixels). For example, the first feature point processing module 430 may perform the resizing to normalize the size of the features that enter the random forest learning. The first feature point processing module 430 may further extract gradient information in four directions of the edge from the resized edge map (e.g., referred to as multi-channel features) and feed it to the random forest algorithm. Through this process, a learned model (a line learning model) can be generated on the basis of general object lines. In various embodiments, the electronic device may store the line learning model learned through this process, and the model may be managed and used based on the first feature point processing module 430.
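A rough sketch of such multi-channel line features, assuming Canny as a stand-in for the structured-edges detector and illustrative thresholds; only the 64-pixel width normalization follows the text:

    import cv2
    import numpy as np

    gray = cv2.imread("candidate_region.png", cv2.IMREAD_GRAYSCALE)

    # Edge map (Canny stands in for the structured-edges detector).
    edge_map = cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0

    # Resize so the edge map is exactly 64 pixels wide, as described above.
    scale = 64.0 / edge_map.shape[1]
    edge_map = cv2.resize(edge_map, None, fx=scale, fy=scale)

    # Gradient information in four directions (0, 90, 45, 135 degrees).
    gx = cv2.Sobel(edge_map, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(edge_map, cv2.CV_32F, 0, 1)
    g45 = (gx + gy) / np.sqrt(2.0)
    g135 = (gx - gy) / np.sqrt(2.0)

    # Channels that would be stacked and fed to the random forest as features.
    features = np.stack([edge_map, gx, gy, g45, g135], axis=-1)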

The second feature point processing module 440 may detect the patch components in the image 500 (e.g., the superpixels 540 and 550 remaining in the example of FIG. 5(d)).

According to one embodiment, in the above-described image 500, line features over the search range can be learned in large units, while it may be difficult to obtain information about the detailed portions of an object. Thus, in various embodiments, for more accurate learning, detailed information may be obtained through learning around the portion where the selection tool (e.g., the bounding box) and the object are in contact. Typically, annotated verification (actual) information for such detailed patch components (e.g., ground-truth (GT) data) is difficult to find, or the amount of data is insufficient. Also, the Pascal VOC dataset used in various embodiments often has only information about the selection tool (e.g., a bounding box). Thus, in various embodiments, over-segmentation in superpixel units may be used to provide verification (actual) information through semi-supervised learning for the portion where the peak of a superpixel and the selection tool are in contact (or the portion near the selection tool). For example, according to various embodiments, semi-supervised learning may be used to obtain a learning model (or training model). In various embodiments, the semi-supervised learning is a method of learning the edge pattern of the object in the case where the ground-truth annotation data is given not in pixel units but only as a selection tool (e.g., a bounding box).

FIG. 6 is a diagram illustrating an operation for learning patch components in an electronic device according to various embodiments of the present invention.

Referring to FIG. 6, in various embodiments, the color information of the 16x16 patch at the portion where the selection tool and the object are in contact and the gradients in the four directions of the edge may be calculated to create features (e.g., multi-channel features), and a patch learning model may be created by training a random forest on the created components. According to one embodiment, the example of FIG. 6 may be a case where nine representative patterns are extracted by K-means clustering of the training 16x16 patches. In various embodiments, the electronic device may store the patch learning model learned through the learning process described above, and the model may be managed and used based on the second feature point processing module 440.

According to various embodiments, the feature point processing module 420 may correct the size of the selection tool 520 so that the selection tool 520 becomes a nearest neighbor to the object, based on the first feature point processing module 430 and the second feature point processing module 440 described above. According to one embodiment, the feature point processing module 420 may adjust at least one side of the selection tool 520 based on the first feature points (e.g., line components) from the first feature point processing module 430 and the second feature points (e.g., patch components) from the second feature point processing module 440. According to one embodiment, the feature point processing module 420 may obtain the score of the peak point of each superpixel by multiplying the two random forest regression scores, and may correct the selection tool 520 accordingly. Examples of results in which the selection tool is corrected according to various embodiments are shown in FIGS. 7A, 7B, 7C, and 7D.
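As a small hedged sketch of this score combination (the score arrays are hypothetical outputs of the line and patch forests):

    import numpy as np

    line_scores = np.array([0.2, 0.9, 0.4])   # line-model score per candidate peak
    patch_scores = np.array([0.3, 0.8, 0.5])  # patch-model score per candidate peak

    # The peak score of each superpixel is the product of the two regression
    # scores; the side of the selection tool snaps to the highest-scoring peak.
    combined = line_scores * patch_scores
    best_candidate = int(np.argmax(combined))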

FIGS. 7A, 7B, 7C and 7D are diagrams illustrating an example in which selection tool correction is applied in an electronic device according to various embodiments of the present invention.

As shown in FIGS. 7A, 7B, 7C and 7D, FIGS. 7A, 7B, 7C and 7D may show examples of bounding box refinement for a user's lazy dragging.

Referring to FIGS. 7A, 7B, 7C, and 7D, a bounding box 710 (hereinafter referred to as the first bounding box 710) represents an optimized bounding box in the case where the user draws precisely around the object to be selected, a bounding box 720 (hereinafter referred to as the second bounding box 720) represents a bounding box that the user has actually dragged (e.g., lazy dragging), and a bounding box 730 (hereinafter referred to as the third bounding box 730) may, according to various embodiments, represent the bounding box obtained by correcting the second bounding box 720.

As shown in FIGS. 7A, 7B, and 7C, regardless of the user's lazy dragging, the second bounding box 720 drawn by the user may, after correction, generally correspond to the first bounding box 710. For example, FIGS. 7A, 7B, and 7C illustrate examples in which the correction of the bounding box for object selection has been performed successfully.

As shown in FIG. 7D, if the intention of the user is not recognized because of a plurality of objects (or superpixels) (e.g., a person and a bicycle) around the bounding box 720 drawn by the user, the second bounding box 720 may be corrected based on the most distinctive boundary and provided as the third bounding box 730. For example, FIG. 7D may show an example in which the correction of the bounding box for object selection is performed unsuccessfully. Thus, in various embodiments, for the case of FIG. 7D, the magnetic processing module 450 may provide simple further correction based on user input.

Referring again to FIG. 4, the magnetic processing module 450 may provide a magnetic function and invisible guiding. For example, the magnetic processing module 450 may process the movement of the selection tool by the magnetic function in superpixel units when the corrected selection tool is further corrected.

According to one embodiment, as illustrated in FIG. 7D described above, even though the user actually wants to select a particular object (e.g., a bicycle) as in the first bounding box 710, when there are a plurality of objects (e.g., a person and a bicycle) around the drawn second bounding box 720, the second bounding box 720 may be corrected to the third bounding box 730 based on the most distinctive boundary (e.g., the boundary of another object). For example, this may not be the correction intended by the user. Thus, in various embodiments, additional correction by the user may be possible. In various embodiments, for the convenience of the user's further correction, corrected line movement in units of the extracted superpixels can be provided.

As shown in FIG. 7D, the user may select (e.g., touch) the top line 735 of the corrected third bounding box 730 and perform a user input of dragging it downward in order to select the intended object (e.g., the bicycle). When moving the top line 735 of the third bounding box 730 downward in response to the user input, the magnetic processing module 450 may make the top line 735 jump in units of the extracted superpixels. For example, the top line 735 may jump from the boundary of the superpixel where it is currently located (e.g., the boundary of the first object (e.g., the person)) to the boundary of the next superpixel located below (e.g., a position corresponding to the top line 715 of the first bounding box 710).
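A minimal sketch of this magnetic jump, assuming a hypothetical list of superpixel boundary rows below the current line:

    def snap_top_line(drag_y: int, current_y: int, boundary_rows: list[int]) -> int:
        """Jump the dragged top line to the next superpixel boundary at or
        below the drag position, instead of moving it pixel by pixel."""
        for row in sorted(boundary_rows):
            if row >= drag_y:
                return row
        return current_y  # no boundary below: keep the line where it is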

As described above, electronic devices 101 and 201 according to various embodiments may include a display 160, 260, a memory 130, 230, and a processor 120, 210 operatively coupled to the display 160, 260 and the memory 130, 230, wherein the processor 120, 210 is configured to display an image through the display 160, 260, draw a selection tool on the image corresponding to a user input, determine a correction range based on a surrounding object of the selection tool when the user input is completed, correct the selection tool to be a nearest neighbor to the object based on the correction range, and display the corrected selection tool.

According to various embodiments, the processor may be configured to distinguish the image in superpixel units, corresponding to sensing the user input for the selection tool.

According to various embodiments, the processor may be configured to learn edge information around the drawn selection tool based on a random forest to determine the correction range.

According to various embodiments, the processor may be configured to remove unnecessary search areas through search area reduction in the image, and generate super pixels of the remaining search area that are not removed as candidate super pixels of the target object.

According to various embodiments, the processor may be configured to detect a line component and a patch component based on the candidate superpixels, and to identify the boundary of the object by referring to the learned model for each detected component.

According to various embodiments, the processor may be configured to determine the correction range based on learned learning data based on a random forest.

According to various embodiments, the processor may utilize semi-supervised learning for acquisition of the training data.

According to various embodiments, the processor may be configured to adjust movement of the selection tool on a per-super-pixel basis, corresponding to sensing a user input for further correction of the corrected selection tool.

According to various embodiments, the processor may be configured to include a search area processing module 410 for processing an operation for search range reduction in the image when the selection tool is drawn corresponding to the user input, and a feature point processing module 420 for processing an operation of correcting the size of the selection tool so that the selection tool is closest to the object.

According to various embodiments, the processor may be configured to include a magnetic processing module 450 that processes the motion of the selection tool by a magnetic function in super-pixel units upon further correction of the corrected selection tool.

According to various embodiments, the processor may be configured to determine a correction range of the selection tool based on an area of the user's finger being touched.

Hereinafter, an operation method according to various embodiments of the present invention will be described with reference to the accompanying drawings. It should be noted, however, that the various embodiments of the present invention are not limited or restricted by the following description, and can be applied to various embodiments based on the embodiments described below. In the various embodiments of the present invention described below, a hardware approach will be described as an example. However, various embodiments of the present invention include techniques using both hardware and software, so that various embodiments of the present invention do not exclude a software-based approach.

FIG. 8 is a diagram illustrating an operation for correcting a selection tool in an electronic device according to various embodiments of the present invention.

As shown in FIG. 8, according to various embodiments, the correction of the selection tool may proceed in three stages: a pre-stage 810, a main stage 820, and an optional stage 830.

In various embodiments, the pre-stage 810 may be an operation of performing search region reduction 815. For example, as described with reference to FIG. 4 above, the pre-stage 810 may be an operation of reducing unnecessary search areas and false positives using graph-based segmentation. According to various embodiments, the search region reduction 815 may generate (or separate) a superpixel lattice and reduce the search area based on the generated superpixel lattice.

In various embodiments, the main stage 820 may include automatically selecting the most promising boundaries in the selection tool (e.g., the bounding box) using edge-based random forest (RF) regression. For example, as described with reference to FIG. 4 above, the main stage 820 may generate learning (training) data by learning (or training) the line component 821 and the patch component 823, extract the training data, generate individually learned trees, and derive a final prediction result (e.g., a random forest 825-based prediction result). According to various embodiments, the line component 821 and the patch component 823 can be learned and extracted based on various edge detection schemes (e.g., structured edges techniques). In the main stage 820, the selection tool can be automatically corrected based on the predicted results for the extracted line component 821 and patch component 823.

In various embodiments, the optional stage 830 may be an operation of adjusting the selection tool based on user input. For example, the optional stage 830 may be a stage of providing a magnetic function and guiding that is invisible to the user. According to one embodiment, the optional stage 830 may be an operation of adjusting at least one side of the selection tool corrected in the main stage 820 based on user input, as described with reference to FIGS. 4 and 7D, and may reuse the superpixel lattice generated in the pre-stage 810 to facilitate additional user interaction (e.g., the user's adjustment of a selected side). For example, in various embodiments, the adjustment of the corrected selection tool may take the directionality of the user input into account and move the corresponding side in the direction of the user input in superpixel units.

According to various embodiments, through the pre-stage 810, when the user selects an object by dragging, the electronic device may use superpixels (e.g., a graph-based segmentation technique) to reduce false positives and speed up the operation. According to various embodiments, through the main stage 820, when the user selects an object by dragging, the electronic device can correct the selection tool by learning/training the edge information around the drawn selection tool with a random forest algorithm. In various embodiments, semi-supervised learning may be used to obtain a learning model (or training model).

As described above, according to various embodiments, even if the user performs lazy dragging, it is possible to provide more accurate object selection by using the object characteristics around the selection tool. In various embodiments, the electronic device may use computer vision and machine learning techniques for automatic correction of the selection tool. According to one embodiment, the electronic device learns/trains the characteristics of the edges of objects in a random forest manner, and can correct the selection tool (e.g., a bounding box) (for example, its four sides, such as the two sides of the transverse width and the two sides of the longitudinal length) based on the learned/trained model. An example of the operation of correcting the selection tool based on the pre-stage 810 and the main stage 820, according to various embodiments, is shown in FIG. 9.

FIG. 9 is a diagram illustrating an example of calibrating and providing a selection tool in an electronic device according to various embodiments of the present invention.

As shown in FIG. 9, FIG. 9 may show an example of correcting and providing a selection tool that has been dragged by a user.

Referring to FIG. 9, in various embodiments, as described above, a random forest algorithm may be used to generate multiple sets of training data through random sampling with replacement from one and the same dataset, and multiple training data trees (e.g., the 1st to 50th trees) can be created. In various embodiments, the final prediction result may be derived using a vote, an average, or a probability based on the generated training data trees.

According to one embodiment, as in the example of FIG. 9(a), this may be a case where the user selects an object (e.g., a chair) in the image by lazy (or rough) dragging, so that the selection tool 910 is drawn loosely around the object. According to one embodiment, the intersection over union (IoU) score of the selection tool 910 corresponding to the user's dragging may be 0.7332. For example, the example of FIG. 9(a) may indicate that the selection tool 910 loosely wrapped the object (e.g., the chair) with an IoU score of approximately 0.73. In various embodiments, the initial selection tool 910 corresponding to the user's dragging may be used as a seed. In various embodiments, the IoU may represent an indicator for evaluating the performance of an algorithm in object detection. According to one embodiment, detection can be considered successful if the IoU score on a PASCAL VOC basis is above a certain score (e.g., a reference score).
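For illustration, the IoU score mentioned above can be computed for two axis-aligned boxes given as (x1, y1, x2, y2) tuples; this is the generic definition, not code from the disclosure:

    def iou(a, b):
        """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / float(area_a + area_b - inter)

    # e.g., iou(drawn_box, optimal_box) ~ 0.73 before correction, ~ 0.99 after.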

According to one embodiment, as illustrated in FIG. 9(b), the electronic device may correct the selection tool 910 drawn by the user on the basis of the training data trees to obtain a final prediction result, and may provide a corrected selection tool 920. According to one embodiment, the IoU score of the corrected selection tool 920 may be as high as 0.9905. For example, the example of FIG. 9(b) may indicate that the corrected selection tool 920 wraps the object (e.g., the chair) more accurately (or precisely) with an IoU score of about 0.99.

In various embodiments, superpixel segmentation (e.g., irregular superpixel segmentation), edge detection (e.g., structured edge detection), and edge analysis (e.g., random forest regression) can be utilized for the sophistication of the automatic correction of the selection tool for object selection.

FIG. 10 is a flow diagram illustrating a method of operation of an electronic device in accordance with various embodiments of the present invention.

Referring to FIG. 10, at operation 1001, a processor (e.g., the processors 120 and 210 of FIG. 1 or FIG. 2, hereinafter referred to as the processor 210) of an electronic device (e.g., the electronic devices 101 and 201 of FIG. 1 or FIG. 2) may display an image through a display (e.g., the displays 160 and 260 of FIG. 1 or FIG. 2).

At operation 1003, the processor 210 may detect dragging on the displayed image. According to one embodiment, the user may perform dragging to select a particular object on the displayed image. In various embodiments, the user may lazy (or roughly) drag.

At operation 1005, the processor 210 may display on the image a selection tool that selects (or wraps) at least one object in response to dragging. According to various embodiments, the processor 210 may variably display the selection tool corresponding to the progress direction of the dragging while the dragging of the user proceeds. According to one embodiment, the processor 210 may process the selection tool to draw, starting from a position (e.g., a point) initially touched by the user in the image.

At operation 1007, the processor 210 may determine whether the dragging has been completed. According to one embodiment, the processor 210 may determine that the dragging is complete when the touch input (dragging) of the user drawing the selection tool is released.

In operation 1007, if the completion of dragging is not sensed (NO in operation 1007), for example, if the user's touch input is maintained or moved, the processor 210 may proceed to operation 1005 and process the subsequent operations.

In operation 1007, when the completion of dragging is detected (YES in operation 1007), the processor 210 can determine the correction range of the selection tool in operation 1009. According to one embodiment, when the processor 210 determines that dragging is completed, the processor 210 can determine whether to correct the selection tool based on the learning (or training) data for correction of the selection tool. According to one embodiment, the processor 210 may determine the final correction range with a random forest algorithm using the line learning model and the patch learning model obtained (computed) through the pre-stage 810 and the main stage 820. According to various embodiments, when determining the correction of the selection tool, the processor 210 may determine at least one side for correction of the selection tool and the correction range of that side based on the training data. In various embodiments, the operation of determining the correction range of the selection tool will be described with reference to FIGS. 11 and 12 below.

At operation 1011, the processor 210 may adjust the selection tool based on the determined correction range. According to one embodiment, the processor 210 may move (adjust) at least one side of the selection tool based on the inferred result based on the random forest algorithm to correct the selection tool.

At operation 1013, the processor 210 may display the adjusted selection tool on the image. According to various embodiments, the adjustment of the selection tool may be performed automatically, depending on the settings of the electronic device, or may be performed based on user input. According to one embodiment, after displaying the initial selection tool according to the user's dragging, when an option (e.g., menu) related to automatic correction is selected by the user, the processor may process correcting the initial selection tool and displaying the corrected selection tool.

At operation 1015, the processor 210 may process the performance of the operation. According to one embodiment, the processor 210 may process an image edit (e.g., resize, rotate, move, cut, copy, etc.) of an object in the selection tool based on user input.

FIG. 11 is a flow chart illustrating a method for determining a correction range of a selection tool in an electronic device according to various embodiments of the present invention.

Referring to FIG. 11, at operation 1101, a processor (e.g., the processors 120 and 210 of FIG. 1 or FIG. 2, hereinafter referred to as the processor 210) of an electronic device (e.g., the electronic devices 101 and 201 of FIG. 1 or FIG. 2) may process an operation of reducing the search area in the image, based on determining to correct the selection tool drawn by the user. According to one embodiment, the processor 210 may use the search area processing module 410 to process an operation for search area reduction in the image displayed through the display. For example, the processor 210 may reduce unnecessary search areas before determining the actual object belonging to the selection tool in the image.

According to various embodiments, the processor 210 may determine the search range to be half the width of the selection tool, centered on a particular side (e.g., the right border, the left border, the upper border, or the lower border) of the object selected by the initial selection tool. According to one embodiment, the processor 210 may determine the search area while rotating the image according to the side used as the reference.

According to various embodiments, the processor 210 may remove artifacts by applying an edge-preserving smoothing filter or the like based on the determined search area. According to various embodiments, the processor 210 may remove the artifacts from the image to create superpixels, and then apply over-segmentation (e.g., irregular superpixel segmentation) on a superpixel basis. According to one embodiment, the processor 210 may determine the background of the object in the selection tool in the image and exclude the superpixels determined to be background from the search area.

According to various embodiments, the processor 210 may generate the remaining superpixels other than the superpixel excluded from the image as candidates of the target object. According to various embodiments, the processor 210 may determine whether the peak of the remaining super pixels (e.g., edge pixels) is the boundary of the object and determine the correction range of the selection tool.

At operation 1103, the processor 210 may correct the selection tool. In accordance with various embodiments, the operation of correcting the selection tool is described with reference to Fig.

FIG. 12 is a flow chart illustrating a method of calibrating a selection tool in an electronic device in accordance with various embodiments of the present invention.

Referring to FIG. 12, at operations 1201 and 1203, a processor (e.g., the processors 120 and 210 of FIG. 1 or FIG. 2, hereinafter referred to as the processor 210) of an electronic device (e.g., the electronic devices 101 and 201 of FIG. 1 or FIG. 2) may detect the line components and the patch components.

According to one embodiment, the processor 210 may use the feature point processing module 420 to process the operation of correcting the selection tool based on the candidate superpixel in the image. In various embodiments, operations 1201 and 1203 are not limited to the order shown, but may be performed sequentially, in parallel, or in reverse sequential order.

According to one embodiment, the processor 210 can determine whether a selected superpixel is the boundary of an object using a random forest model. According to one embodiment, the processor 210 can detect the line components (first feature points) and the patch components (second feature points) in the image (e.g., the selected superpixels) based on the line learning model and the patch learning model.

According to one embodiment, the processor 210 may detect the overall line components in the selected superpixels, based on the line learning model learned through various edge detection schemes (e.g., structured edges techniques).

According to one embodiment, the processor 210 may detect the overall patch components in the selected superpixels, based on the patch learning model learned around the portion where the selection tool and the object (e.g., the selected superpixel) are in contact.

At operation 1205, the processor 210 may determine the correction range of the selection tool based on the line components and the patch components. According to one embodiment, the processor 210 may determine the correction range based on the training data learned with the random forest algorithm using edge information (e.g., line components, patch components) around the drawn selection tool. For example, the processor 210 may analyze the detected line and patch components as described above, and may determine the correction range (or size) of the selection tool so that it is as close as possible to the object, based on the results of the analysis. According to one embodiment, the processor 210 may adjust at least one side of the selection tool based on the correction range.

According to various embodiments, the processor 210 may correct the selection tool in various ways other than the manner described above. According to one embodiment, a user may use a finger to draw the selection tool on the display, and the processor 210 may determine the correction range of the selection tool based on the area (or width) of the finger contact and adjust the selection tool accordingly. For example, the processor 210 may track the touched area (or region) of the user's finger and apply a portion of the tracked area to the correction range estimate. The processor 210 may adjust at least one side of the selection tool inward or outward to maximize proximity to the object in accordance with the determined correction range.
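As a loosely hedged sketch of this heuristic (the mapping from contact area to correction range is entirely an illustrative assumption, not from the disclosure):

    def correction_range_from_touch(contact_area_px: float, base_range_px: float) -> float:
        # A larger finger contact area suggests a coarser drag, so widen the
        # range within which the selection tool's sides may be adjusted.
        # The scaling constant (400.0) is an illustrative assumption.
        return base_range_px * (1.0 + contact_area_px / 400.0)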

FIG. 13 is a flow chart illustrating a method for selecting an object by correction of a selection tool in an electronic device according to various embodiments of the present invention.

Referring to FIG. 13, at operation 1301, a processor (e.g., the processors 120 and 210 of FIG. 1 or FIG. 2, hereinafter referred to as the processor 210) of an electronic device (e.g., the electronic devices 101 and 201 of FIG. 1 or FIG. 2) may display an image through a display (e.g., the displays 160 and 260 of FIG. 1 or FIG. 2).

At operation 1303, the processor 210 may sense an option selection for object selection. According to one embodiment, the processor 210 may sense a selection tool option selection that can draw a selection tool from among various options associated with image editing. According to one embodiment, processor 210 may enter an edit mode that can draw a selection tool based on user input and wait for user input (e.g., dragging) when a selection tool option is selected.

At operation 1305, the processor 210 may detect dragging on the displayed image. According to one embodiment, the user may perform dragging to select a particular object on the displayed image. In various embodiments, the user may perform lazy (or roughly) dragging.

At operation 1307, the processor 210 may display a selection tool on the image to select (or wrap) at least one object in response to dragging. According to various embodiments, the processor 210 may variably display the selection tool corresponding to the progress direction of the dragging while the dragging of the user proceeds. According to one embodiment, the processor 210 may process the selection tool to draw, starting from a position (e.g., a point) initially touched by the user in the image.

In operation 1309, the processor 210 may determine whether or not dragging by the user has been completed. According to one embodiment, the processor 210 may determine that the dragging is complete when the touch input (dragging) of the user drawing the selection tool is released.

At operation 1309, if the processor 210 determines that dragging is not complete (NO in operation 1309), the processor 210 may proceed to operation 1307 and process the operations following operation 1307.

In operation 1309, when the processor 210 determines that dragging is completed (YES in operation 1309), in operation 1311 the processor 210 may display the selection tool corresponding to the completion of dragging. According to one embodiment, the processor 210 may display a selection tool corresponding to the size of a figure whose diagonal runs from the point where the touch input for the user's dragging started to the point where the touch input was released.
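A minimal sketch of deriving the initial selection tool from the drag gesture, assuming screen coordinates with the origin at the top left:

    def initial_selection_tool(start, end):
        """Bounding box whose diagonal runs from the touch-down point to the
        touch-release point, returned as (x1, y1, x2, y2)."""
        (sx, sy), (ex, ey) = start, end
        return (min(sx, ex), min(sy, ey), max(sx, ex), max(sy, ey))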

At operation 1313, the processor 210 may determine whether there is an automatic correction request for the displayed selection tool. According to one embodiment, the processor 210 may determine that there is an automatic correction request when the automatic correction function is enabled in the settings of the electronic device. According to one embodiment, the processor 210 may display the selection tool and then determine that there is an automatic correction request based on a user input selecting an automatic correction option.

In operation 1313, if the processor 210 determines that there is no automatic correction request (NO in operation 1313), then in operation 1331 the processor 210 may process the performance of the operation. According to one embodiment, the processor 210 may process an image edit (e.g., resizing, rotating, moving, cropping, copying, etc.) corresponding to a user input, for an object in the selection tool drawn by the user.

In operation 1313, when the processor 210 determines that there is an automatic correction request (YES in operation 1313), in operation 1315 it may process the operation of reducing the search area in the image. In various embodiments, the processor 210 may perform the operation of determining the correction range for correcting the selection tool based on whether the automatic correction function is activated after the initial selection tool is displayed, or based on sensing a user input selecting an automatic correction option after the initial selection tool is displayed. In various embodiments, operation 1315 may correspond to the search area reduction operation described with reference to operation 1101 of FIG. 11 above.

At operations 1317 and 1319, the processor 210 may process operations to detect the line components and the patch components. In various embodiments, operations 1317 and 1319 may correspond to the line component detection operation and the patch component detection operation described with reference to operations 1201 and 1203 of FIG. 12 above.

At operation 1321, the processor 210 may determine the correction range of the selection tool based on the line components and the patch components. In various embodiments, operation 1321 may correspond to the correction range determination operation described with reference to operation 1205 of FIG. 12 above.

At operation 1323, the processor 210 may automatically adjust the selection tool based on the correction range. According to one embodiment, the processor 210 may determine at least one side of the selection tool to adjust, based on the determined correction range, and may move the at least one side based on the determination result.

At operation 1325, the processor 210 may display the adjusted selection tool on the image. According to various embodiments, the processor 210 may display a selection tool to maximize proximity to the object in the image.

At operation 1327, the processor 210 may determine whether an additional correction request is detected. According to one embodiment, the processor 210 may determine whether there is a user input to select and drag the selection tool. For example, the user may wish to further correct the automatically corrected selection tool, and in order to move at least one side of the selection tool, the user may select (touch) at least one side and drag it in a particular direction.

In operation 1327, if the performance of an additional correction is not detected (NO in operation 1327), the processor 210 may process the performance of the operation in operation 1331. According to one embodiment, the processor 210 may process an image edit (e.g., resize, rotate, move, cut, copy, etc.) corresponding to a user input.

At operation 1327, if the execution of an additional correction is detected (YES in operation 1327), the processor 210 may process the line correction based on the magnetic function at operation 1329. According to one embodiment, the processor 210 may use the magnetic processing module 450 to process the additional correction operation corresponding to the user input. According to various embodiments, the processor 210 may process the movement of the corresponding line (e.g., the selected side) of the selection tool in superpixel units during the line correction according to the user input.

At operation 1331, the processor 210 may process the operation. According to one embodiment, the processor 210 may process an image edit (e.g., resize, rotate, move, cut, copy, etc.) corresponding to a user input.

As described above, the method of operating the electronic device according to various embodiments may include displaying an image through the display 160, 260, drawing a selection tool on the image corresponding to a user input, determining a correction range based on a surrounding object of the selection tool when the user input is completed, and correcting the selection tool to be a nearest neighbor to the object based on the correction range.

According to various embodiments, the step of determining the correction range may include, in response to sensing the user input for the selection tool, dividing the image into superpixels.

According to various embodiments, the step of determining the correction range may include determining the correction range by learning edge information around the drawn selection tool based on a random forest.

According to various embodiments, the step of determining the correction range may include a process of removing an unnecessary search area through search area reduction in the image, and a process of generating superpixels of the remaining search area as candidate superpixels of the target object.

According to various embodiments, the step of determining the correction range may include a process of detecting a line component and a patch component based on the candidate superpixels, and a process of identifying the boundary of the object by referring to the learned model for each detected component.

According to various embodiments, the step of determining the correction range may include determining the correction range based on learning data learned with a random forest, and semi-supervised learning may be used to obtain the learning data.

According to various embodiments, the method may further include detecting a user input for further correction of the corrected selection tool, and adjusting the movement of the selection tool in superpixel units in response to the user input.

According to various embodiments, the correcting step may include processing an operation for search range reduction in the image when the selection tool is drawn corresponding to the user input, and correcting the size of the selection tool so that the selection tool is closest to the object.

According to various embodiments, the step of determining the correction range may include determining a correction range of the selection tool based on an area of the user's finger that is touched.

The various embodiments of the present invention disclosed in this specification and the drawings are merely specific examples presented to facilitate understanding of the present invention, and are not intended to limit the scope of the present invention. Accordingly, all changes or modifications derived from the technical idea of the present invention should be construed as being included in the scope of the present invention.

101, 201: Electronic device
120, 210: Processor
130, 230: memory
160, 260: Display
400: correction module
410: Search area processing module
420: Feature point processing module
450: Magnetic processing module

Claims (20)

1. An electronic device comprising:
a display;
a memory; and
a processor operably coupled to the display and the memory, wherein the processor is configured to:
display an image through the display,
draw a selection tool on the image corresponding to a user input,
determine a correction range based on a surrounding object of the selection tool when the user input is completed, and
correct the selection tool to be a nearest neighbor to the object based on the correction range.
2. The apparatus of claim 1,
And to distinguish the images in superpixel units, corresponding to sensing the user input for the selection tool.
3. The apparatus of claim 2,
And the edge information around the drawn selection tool is learned based on a random forest to determine the correction range.
4. The apparatus of claim 3,
Removing an unnecessary search area through the search area reduction in the image,
And to generate a super-pixel of the remaining non-removed search area as a candidate super-pixel of the target object.
5. The apparatus of claim 4,
Detecting a line component and a patch component based on the candidate super pixel,
And the boundaries of the objects are identified by referring to the learned models for the detected components.
6. The apparatus of claim 2,
And to determine the correction range based on the learned learning data based on a random forest.
7. The apparatus of claim 6,
And using semi-supervised learning to obtain the learning data.
8. The apparatus of claim 2,
And to adjust the movement of the selection tool on a per-super-pixel basis, corresponding to sensing a user input for further correction of the corrected selection tool.
9. The apparatus of claim 1, comprising:
A search area processing module that processes an operation for search range reduction in the image when the selection tool is drawn corresponding to the user input; and
A feature point processing module for processing an operation of correcting the size of the selection tool such that the selection tool is closest to the object.
10. The apparatus of claim 1,
And a magnetic processing module configured to process the movement of the selection tool by the magnetic function in units of superpixels at the time of further correction of the corrected selection tool.
11. The apparatus of claim 1,
And to determine a correction range of the selection tool based on the touched area of the user's finger.
12. A method of operating an electronic device, the method comprising:
A process of displaying an image through a display,
Drawing a selection tool on the image corresponding to user input,
Determining a correction range based on a surrounding object of the selection tool when the user input is completed,
And correcting the selection tool to be a nearest neighbor to the object based on the correction range.
13. The method of claim 12, wherein the step of determining the correction range comprises:
And separating the image into superpixel units corresponding to sensing the user input for the selection tool.
14. The method of claim 13, wherein the step of determining the correction range comprises:
And determining the correction range by learning edge information around the drawn selection tool based on a random forest.
15. The method of claim 14, wherein the step of determining the correction range comprises:
Removing an unnecessary search area through the search area reduction in the image,
And generating super pixels of the remaining non-removed search regions as candidate super pixels of the target object.
16. The method of claim 15, wherein the step of determining the correction range comprises:
Detecting a line component and a patch component based on the candidate super pixel;
And dividing the boundary of the object by referring to the learned model for each detected component.
17. The method of claim 13, wherein the step of determining the correction range comprises:
And determining the correction range based on the learned learning data based on a random forest,
Wherein semi-supervised learning is used for obtaining the learning data.
18. The method of claim 13, further comprising:
Detecting a user input for further correction of the corrected selection tool,
And adjusting the movement of the selection tool in units of the superpixels in response to the user input.
19. The method of claim 12, wherein the correcting comprises:
Processing an operation for a search range reduction in the image when the selection tool is drawn corresponding to the user input,
And correcting the size of the selection tool such that the selection tool is closest to the object.
20. The method of claim 12, wherein the step of determining the correction range comprises determining a correction range of the selection tool based on an area touched by the user's finger.
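A sketch of claim 20's touch-area-based correction range, assuming a simple mapping from contact area to an equivalent-circle radius; the scale factor is an assumed heuristic, since the claim does not specify the mapping.

```python
# Correction range from the finger's touch area (claim 20): a broader
# touch implies a coarser intent, so a wider search range.
import math

def correction_range_from_touch(touch_area_px, scale=1.5, minimum=5):
    """touch_area_px: contact area reported by the touch sensor, in px^2."""
    radius = math.sqrt(touch_area_px / math.pi)  # equivalent circle radius
    return max(int(scale * radius), minimum)
```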
KR1020170002488A 2017-01-06 2017-01-06 Electronic device and operating method thereof KR20180081353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020170002488A KR20180081353A (en) 2017-01-06 2017-01-06 Electronic device and operating method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020170002488A KR20180081353A (en) 2017-01-06 2017-01-06 Electronic device and operating method thereof

Publications (1)

Publication Number Publication Date
KR20180081353A true KR20180081353A (en) 2018-07-16

Family

ID=63105695

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020170002488A KR20180081353A (en) 2017-01-06 2017-01-06 Electronic device and operating method thereof

Country Status (1)

Country Link
KR (1) KR20180081353A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109298819A (en) * 2018-09-21 2019-02-01 Oppo广东移动通信有限公司 Method, apparatus, terminal and the storage medium of selecting object
KR20200092456A (en) * 2019-01-07 2020-08-04 한림대학교 산학협력단 Apparatus and method of correcting touch sensor input
WO2020183656A1 (en) * 2019-03-13 2020-09-17 日本電気株式会社 Data generation method, data generation device, and program
JPWO2020183656A1 (en) * 2019-03-13 2021-11-18 日本電気株式会社 Data generation method, data generation device and program
KR102352942B1 (en) * 2021-01-13 2022-01-19 셀렉트스타 주식회사 Method and device for annotating object boundary information
KR102310595B1 (en) * 2021-02-10 2021-10-13 주식회사 인피닉 Annotation method of setting object properties using proposed information, and computer program recorded on record-medium for executing method thereof
KR102310585B1 (en) * 2021-02-10 2021-10-13 주식회사 인피닉 Annotation method of assigning object simply, and computer program recorded on record-medium for executing method thereof
KR102343036B1 (en) * 2021-02-10 2021-12-24 주식회사 인피닉 Annotation method capable of providing working guides, and computer program recorded on record-medium for executing method thereof
KR102356909B1 (en) * 2021-05-13 2022-02-08 주식회사 인피닉 Annotation method of assigning object and setting object properties for learning data of artificial intelligence, and computer program recorded on record-medium for executing method thereof

Similar Documents

Publication Publication Date Title
EP3346696B1 (en) Image capturing method and electronic device
US10429905B2 (en) Electronic apparatus having a hole area within screen and control method thereof
US9904409B2 (en) Touch input processing method that adjusts touch sensitivity based on the state of a touch object and electronic device for supporting the same
CN110476189B (en) Method and apparatus for providing augmented reality functions in an electronic device
US10996847B2 (en) Method for providing content search interface and electronic device for supporting the same
CN107077292B (en) Cut and paste information providing method and device
US10917552B2 (en) Photographing method using external electronic device and electronic device supporting the same
KR20180081353A (en) Electronic device and operating method thereof
US10445485B2 (en) Lock screen output controlling method and electronic device for supporting the same
US10642437B2 (en) Electronic device and method for controlling display in electronic device
KR102500715B1 (en) Electronic apparatus and controlling method thereof
CN115097982A (en) Method for processing content and electronic device thereof
EP3336675B1 (en) Electronic devices and input method of electronic device
EP3125101A1 (en) Screen controlling method and electronic device for supporting the same
KR20180010029A (en) Method and apparatus for operation of an electronic device
US10726193B2 (en) Electronic device and operating method thereof
KR20180025763A (en) Method and apparatus for providing miracast
KR20180014614A (en) Electronic device and method for processing touch event thereof
US10091436B2 (en) Electronic device for processing image and method for controlling the same
KR20180037753A (en) Electronic apparatus and operating method thereof
KR102408942B1 (en) Method for processing input of electronic device and electronic device
KR20180020473A (en) Electronic apparatus and method for controlling thereof