KR20180081353A - Electronic device and operating method thereof - Google Patents
- Publication number
- KR20180081353A (application KR1020170002488A)
- Authority
- KR
- South Korea
- Prior art keywords
- selection tool
- electronic device
- various embodiments
- processor
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/12—Bounding box
Abstract
Description
Various embodiments of the present invention disclose methods and apparatus for object selection by dragging in an electronic device.
2. Description of the Related Art
Recently, with the development of digital technology, various types of electronic devices, such as mobile phones, smartphones, tablet PCs, notebooks, personal digital assistants (PDAs), wearable devices, digital cameras, and personal computers, have come into widespread use.
In electronic devices, various image editing functions (e.g., an object selection function) are provided to meet users' image editing needs. For example, an electronic device may select an object for editing (e.g., cropping, copying, etc.) in an image displayed through the display, in response to a user's dragging input. According to one embodiment, the user can select only a specific object in the image (e.g., to cut out an unnecessary portion other than the object) or perform editing such as copying.
The electronic device can draw a selection tool of a chosen type (e.g., a bounding box such as a rectangular box, a freeform box, or an elliptical box) and display it on the image. The user can adjust the displayed selection tool to a desired size by dragging (e.g., resize the box to include only the desired object). According to one embodiment, a selection tool (e.g., a bounding box) may be provided as a rectangular box surrounding the object and containing a specific number of adjustment handles (e.g., four, eight, etc.), through which the object can be rotated, moved, cut, copied, and so on. For example, the selection tool drawn in response to dragging on the image may provide eight adjustment handles (e.g., small polygons such as rhombuses or circles) that can be selected (e.g., touched), and the user can adjust the selection tool by moving an adjustment handle.
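The eight-handle arrangement described above can be sketched as a small data structure. This is an illustrative sketch only, not code from the patent; the class, the handle names, and the touch tolerance are all hypothetical.

```python
# Illustrative sketch (not from the patent): a rectangular bounding box
# with eight adjustment handles (four corners + four edge midpoints).
# HANDLE_RADIUS is an assumed touch tolerance in pixels.

HANDLE_RADIUS = 10

class BoundingBox:
    def __init__(self, left, top, right, bottom):
        self.left, self.top, self.right, self.bottom = left, top, right, bottom

    def handles(self):
        """Return the eight handle positions: four corners and four edge midpoints."""
        cx = (self.left + self.right) / 2
        cy = (self.top + self.bottom) / 2
        return {
            "top-left": (self.left, self.top),
            "top": (cx, self.top),
            "top-right": (self.right, self.top),
            "right": (self.right, cy),
            "bottom-right": (self.right, self.bottom),
            "bottom": (cx, self.bottom),
            "bottom-left": (self.left, self.bottom),
            "left": (self.left, cy),
        }

    def hit_handle(self, x, y):
        """Return the name of the handle under the touch point, or None."""
        for name, (hx, hy) in self.handles().items():
            if abs(x - hx) <= HANDLE_RADIUS and abs(y - hy) <= HANDLE_RADIUS:
                return name
        return None

    def drag_handle(self, name, x, y):
        """Move one handle, resizing the box accordingly."""
        if "left" in name:
            self.left = x
        if "right" in name:
            self.right = x
        if "top" in name:
            self.top = y
        if "bottom" in name:
            self.bottom = y
```

A corner handle moves two sides at once (e.g., "bottom-right" updates both `right` and `bottom`), while an edge-midpoint handle moves only one side, matching the resize behavior described above.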
A dragging method for drawing a selection tool in an electronic device amounts to tracing a virtual rectangular box that the user has in mind. Therefore, the user must repeat trial and error in order to point out the virtual rectangle accurately. Further, while the user is pointing for dragging, the input tool (e.g., the user's finger, an electronic pen, etc.) can occlude the part of the screen being pointed at, so it is difficult for the user to perform an accurate dragging operation. For example, in order to draw a selection tool accurately around a desired object, repeated trial and error is required, and precisely enclosing the desired object demands considerable dexterity and effort.
Various embodiments disclose a method and apparatus for automatically correcting a selection tool (e.g., a bounding box) so that a dragging operation can be performed more simply and easily when a user edits an image in an electronic device.
In various embodiments, a method and apparatus are provided for automatically correcting a selection tool (e.g., a bounding box) drawn by a user in an electronic device so that it becomes a nearest neighbor of the object.
In various embodiments, a method and apparatus for automatically correcting a selection tool drawn for object selection in an image displayed via an electronic device based on the object is disclosed.
In various embodiments, a method and apparatus are disclosed in which, when editing an image in an electronic device, the sides of the selection tool (e.g., each line segment forming a polygon) drawn according to the user's dragging are adjusted (e.g., corrected) based on the object, so that the desired object is selected.
Various embodiments disclose an object selection method and apparatus capable of selecting a desired object more precisely even for a user's rough dragging, by using the properties of objects around the selection tool in the image.
In various embodiments, a method and apparatus are disclosed that can reduce false positives and speed up computations when a user selects an object with dragging in an electronic device.
In various embodiments, a method and apparatus are disclosed for automatically correcting the drawn selection tool (e.g., the bounding box), based at least in part on edge detection and image segmentation techniques, even when the user drags only roughly near the desired object in the image.
An electronic device in accordance with various embodiments of the present invention includes a display, a memory, and a processor operatively associated with the display and the memory, the processor being configured to display an image through the display, draw a selection tool on the image in response to a user input, determine, upon completion of the user input, a correction range based on objects surrounding the selection tool, and correct and display the selection tool so that it becomes a nearest neighbor of the object based on the correction range.
A method of operating an electronic device according to various embodiments of the present invention includes the operations of displaying an image through a display, drawing a selection tool on the image in response to a user input, determining, upon completion of the user input, a correction range based on objects surrounding the selection tool, and correcting and displaying the selection tool so as to be a nearest neighbor to the object based on the correction range.
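The correction step above can be sketched in simplified form: each side of the user-drawn selection tool is moved to the nearest detected object edge that lies within the correction range. This is a hedged sketch under stated assumptions; the function names are hypothetical, and the candidate edge coordinates here stand in for what the patent derives via edge detection and image segmentation.

```python
# Hedged sketch of the "nearest neighbor" correction: snap each side of a
# user-drawn (left, top, right, bottom) box to the nearest object edge
# within the correction range. Edge coordinates are assumed inputs.

def snap_side(side_pos, edge_positions, correction_range):
    """Move one side to the nearest candidate edge within the correction range.

    side_pos         -- coordinate of the drawn side (e.g., x of the left side)
    edge_positions   -- candidate edge coordinates along the same axis
    correction_range -- maximum distance the side may be moved
    """
    candidates = [e for e in edge_positions if abs(e - side_pos) <= correction_range]
    if not candidates:
        return side_pos  # no nearby edge: keep the side where the user drew it
    return min(candidates, key=lambda e: abs(e - side_pos))

def correct_box(box, edges_x, edges_y, correction_range):
    """Correct a (left, top, right, bottom) box toward surrounding object edges."""
    left, top, right, bottom = box
    return (
        snap_side(left, edges_x, correction_range),
        snap_side(top, edges_y, correction_range),
        snap_side(right, edges_x, correction_range),
        snap_side(bottom, edges_y, correction_range),
    )
```

For example, a roughly drawn box `(25, 22, 133, 99)` with object edges at x = 18 and 140 and y = 30 and 95 would be corrected to `(18, 30, 140, 95)` with a correction range of 10 pixels, while a side with no edge in range stays where the user drew it.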
In order to solve the above problems, various embodiments of the present invention may include a computer-readable recording medium recording a program for causing the processor to execute the method.
According to the electronic device and its method of operation according to various embodiments, the electronic device may automatically correct the drawn selection tool (e.g., the bounding box) based at least in part on edge detection and image segmentation techniques. Thus, according to various embodiments, when a user selects a particular object from the displayed image, the electronic device can automatically correct the selection tool by aligning it around the object, even if the user drags only roughly near the desired object.
According to various embodiments, it is possible to select the object desired by the user more precisely even for rough dragging, using the characteristics of the objects around the selection tool in the image, thereby improving user convenience. According to various embodiments, in the user's image editing operation, the dragging operation for object selection can be performed more intuitively and simply, thereby improving user convenience. According to various embodiments, when a user selects an object by dragging in an electronic device, it is possible to reduce false positives and increase computation speed.
An electronic device according to various embodiments can thus contribute to improving the usability, convenience, or safety of the electronic device.
FIG. 1 is a diagram illustrating a network environment including an electronic device according to various embodiments of the present invention.
FIG. 2 is a block diagram of an electronic device in accordance with various embodiments of the present invention.
FIG. 3 is a block diagram of a program module in accordance with various embodiments of the present invention.
FIG. 4 is a diagram illustrating an example of a correction module associated with object selection in an electronic device according to various embodiments of the present invention.
FIG. 5 is a diagram illustrating an operation for reducing a search area in an electronic device according to various embodiments of the present invention.
FIG. 6 is a diagram illustrating an operation for learning patch components in an electronic device according to various embodiments of the present invention.
FIGS. 7A, 7B, 7C and 7D are diagrams illustrating an example in which selection tool correction is applied in an electronic device according to various embodiments of the present invention.
FIG. 8 is a diagram illustrating an operation for correcting a selection tool in an electronic device according to various embodiments of the present invention.
FIG. 9 is a diagram illustrating an example of calibrating and providing a selection tool in an electronic device according to various embodiments of the present invention.
FIG. 10 is a flow diagram illustrating a method of operation of an electronic device in accordance with various embodiments of the present invention.
FIG. 11 is a flow chart illustrating a method for determining a correction range of a selection tool in an electronic device according to various embodiments of the present invention.
FIG. 12 is a flow chart illustrating a method of calibrating a selection tool in an electronic device in accordance with various embodiments of the present invention.
FIG. 13 is a flow chart illustrating a method for selecting an object by correction of a selection tool in an electronic device according to various embodiments of the present invention.
Hereinafter, various embodiments of the present document will be described with reference to the accompanying drawings. It is to be understood that the embodiments and terminologies used herein are not intended to limit the invention to the particular embodiments described, but to include various modifications, equivalents, and/or alternatives of the embodiments. In connection with the description of the drawings, like reference numerals may be used for similar components. Singular expressions may include plural expressions unless the context clearly dictates otherwise. In this document, expressions such as "A or B" or "at least one of A and/or B" may include all possible combinations of the items listed together. Expressions such as "first" or "second" may modify the corresponding components regardless of order or importance, and are used only to distinguish one component from another without limiting those components. When it is mentioned that some (e.g., first) component is "(functionally or communicatively) connected" or "coupled" to another (second) component, the component may be connected directly to the other component or may be connected through yet another component (e.g., a third component).
In this document, the term "configured to" may be used interchangeably, in hardware or software, with "suitable for," "having the capacity to," "adapted to," "made to," "capable of," or "designed to." In some situations, the expression "a device configured to" may mean that the device "can" perform an operation together with other devices or components. For example, the phrase "a processor configured (or set) to perform A, B, and C" may mean a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a generic-purpose processor (e.g., a central processing unit (CPU) or an application processor (AP)) capable of performing the corresponding operations by executing one or more software programs stored in a memory device.
Electronic devices in accordance with various embodiments of the present document may include, for example, at least one of smartphones, tablet PCs, mobile phones, video phones, electronic book readers, desktop PCs, laptop PCs, netbook computers, workstations, servers, personal digital assistants (PDAs), portable multimedia players (PMPs), MP3 players, medical devices, cameras, or wearable devices. A wearable device may include at least one of an accessory type (e.g., a watch, a ring, a bracelet, an anklet, a necklace, a pair of glasses, a contact lens, or a head-mounted device (HMD)), a fabric- or garment-integrated type (e.g., electronic clothing), a body-attached type (e.g., a skin pad or tattoo), or a bio-implantable circuit. In some embodiments, the electronic device may include at least one of, for example, a television, a digital video disk (DVD) player, an audio system, a refrigerator, an air conditioner, a vacuum cleaner, an oven, a microwave oven, a washing machine, an air purifier, a set-top box, a home automation control panel, a security control panel, a media box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™, PlayStation™), an electronic dictionary, an electronic key, a camcorder, or an electronic photo frame.
In another embodiment, the electronic device may include at least one of various medical devices (e.g., various portable medical measurement devices such as a blood glucose meter, a heart rate meter, a blood pressure meter, or a thermometer, magnetic resonance angiography (MRA), magnetic resonance imaging (MRI), computed tomography (CT), or an imaging or ultrasonic device), a navigation system, a global navigation satellite system (GNSS), an event data recorder (EDR), a flight data recorder (FDR), an automotive infotainment device, marine electronic equipment (e.g., a marine navigation system or a gyrocompass), avionics, security devices, vehicle head units, industrial or home robots, drones, automated teller machines (ATMs), point of sale (POS) terminals of shops, or Internet of Things (IoT) devices (e.g., light bulbs, sensors, sprinkler devices, fire alarms, thermostats, street lights, toasters, exercise equipment, hot water tanks, heaters, boilers, etc.). According to some embodiments, the electronic device may include at least one of a piece of furniture, a part of a building/structure or automobile, an electronic board, an electronic signature receiving device, a projector, or various measuring instruments (e.g., for water, electricity, gas, or radio waves). In various embodiments, the electronic device may be flexible, or may be a combination of two or more of the various devices described above. The electronic device according to the embodiments of the present document is not limited to the above-described devices. In this document, the term user may refer to a person using an electronic device or a device using an electronic device (e.g., an artificial intelligence electronic device).
FIG. 1 is a diagram illustrating a network environment including an electronic device according to various embodiments.
Referring to Figure 1, in various embodiments, an
The
The
The
The
The
The input /
The
In various embodiments, the
The
Wireless communication may include, for example, cellular communication using at least one of long term evolution (LTE), LTE Advance (LTE-A), code division multiple access (CDMA), wideband CDMA (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), or global system for mobile communications (GSM). According to one embodiment, the wireless communication may include, for example, at least one of wireless fidelity (WiFi), wireless gigabit alliance (WiGig), Bluetooth, Bluetooth low energy (BLE), Zigbee, near field communication (NFC), magnetic secure transmission (MST), radio frequency (RF), or body area network (BAN). According to one embodiment, the wireless communication may comprise a GNSS. The GNSS may be, for example, a global positioning system (GPS), a global navigation satellite system (Glonass), a Beidou Navigation Satellite System (Beidou), or Galileo, the European global satellite-based navigation system. Hereinafter, in this document, "GPS" can be used interchangeably with "GNSS".
The wired communication may include, for example, at least one of a universal serial bus (USB), a high definition multimedia interface (HDMI), a recommended standard 232 (RS-232), power line communication, or a plain old telephone service (POTS).
The
Each of the first external electronic device 102 and the second external
The
FIG. 2 is a block diagram of an electronic device according to various embodiments.
The
The processor 210 may, for example, drive an operating system or application program to control a number of hardware or software components coupled to the processor 210, and may perform various data processing and operations. The processor 210 may be implemented with, for example, a system on chip (SoC). According to one embodiment, the processor 210 may further include a graphics processing unit (GPU) and / or an image signal processor (ISP). Processor 210 may include at least some of the components shown in FIG. 2 (e.g., cellular module 221). Processor 210 may load and process instructions or data received from at least one of the other components (e.g., non-volatile memory) into volatile memory and store the resulting data in non-volatile memory.
In various embodiments, the processor 210 may control the overall operation of the
In various embodiments, the processor 210 may include an audio module 280, an interface 270, a display 260, a
According to various embodiments, the processor 210 may automatically correct a selection tool (e.g., a bounding box such as a rectangular box, a freeform box, or an elliptical box) drawn by the user, to enable more accurate object selection. According to one embodiment, the processor 210 may detect a selection tool (e.g., a bounding box) drawn in the
According to various embodiments, the processor 210 may perform an operation of displaying an image through the display 260, an operation of drawing a selection tool on the image corresponding to the user input, an operation of determining a correction range based on the objects surrounding the selection tool, and an operation of correcting and displaying the selection tool so as to be a nearest neighbor to the object based on the correction range.
According to various embodiments, the processor 210, when determining the correction range, may control an operation of separating the image into superpixel units, corresponding to sensing user input for the selection tool. In accordance with various embodiments, the processor 210 may learn edge information (e.g., line components, patch components) around the drawn selection tool based on a random forest, and determine the correction range based on the learning
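Restricting the correction-range search to the area around the drawn selection tool (the search-area reduction illustrated in FIG. 5) can be sketched as follows. This is a minimal illustrative sketch: a plain pixel grid stands in for a real superpixel partition (e.g., SLIC), and the cell size, band width, and function names are all assumptions, not the patent's method.

```python
# Minimal sketch of search-area reduction: partition the image into
# superpixel-like cells (a fixed grid here, as a stand-in for a real
# superpixel algorithm) and keep only the cells whose anchor point lies
# within `band` pixels of the drawn box's border. CELL is an assumed size.

CELL = 16  # superpixel cell size in pixels (assumption)

def dist_to_border(x, y, box):
    """Distance from a point to the border of a (left, top, right, bottom) box."""
    left, top, right, bottom = box
    if left <= x <= right and top <= y <= bottom:
        # inside the box: distance to the nearest side
        return min(x - left, right - x, y - top, bottom - y)
    # outside the box: Chebyshev-style distance from the per-axis overshoot
    dx = max(left - x, 0, x - right)
    dy = max(top - y, 0, y - bottom)
    return max(dx, dy)

def search_cells(box, band, width, height, cell=CELL):
    """Grid cells whose top-left corner lies within `band` px of the box border."""
    return {
        (cx // cell, cy // cell)
        for cy in range(0, height, cell)
        for cx in range(0, width, cell)
        if dist_to_border(cx, cy, box) <= band
    }
```

Only the cells in this band need to have their edge features extracted and scored, which is one way to obtain the reduction in computation and false positives that the embodiments describe: cells deep inside the box or far outside it never enter the search.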
The processing (or control) operation of the processor 210 in accordance with various embodiments is specifically described with reference to the following figures.
The communication module 220 may have the same or similar configuration as, for example, the
The cellular module 221 may provide, for example, voice calls, video calls, text services, or Internet services over a communication network. According to one embodiment, the cellular module 221 may perform identification and authentication of the
The
The
The
The
Memory 230 (e.g., memory 130) may include, for example, internal memory 232 or external memory 234. The built-in memory 232 may include, for example, at least one of volatile memory (e.g., dynamic RAM (DRAM), static RAM (SRAM), or synchronous dynamic RAM (SDRAM)) or non-volatile memory (e.g., one-time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, flash ROM, flash memory, a hard drive, or a solid state drive (SSD)). The external memory 234 may include a flash drive, for example, compact flash (CF), secure digital (SD), micro-SD, mini-SD, extreme digital (xD), a multi-media card (MMC), or a memory stick. The external memory 234 may be functionally or physically connected to the
The memory 230 may store data or instructions that, when executed, cause the processor 210 to display an image through the display 260, draw a selection tool on the image in response to a user input, determine a correction range based on objects surrounding the selection tool, and correct and display the selection tool to be a nearest neighbor to the object based on the correction range.
In accordance with various embodiments, the memory 230 may store data or instructions related to allowing the processor 210 to partition the image into superpixel units, corresponding to the sensed user input for the selection tool, and to learn edge information (e.g., line components, patch components) based on a random forest to determine the correction range.
The memory 230 may include an extended memory (e.g., external memory 234) or an internal memory (e.g., internal memory 232). The
The memory 230 may store one or more software modules. For example, software components may include an operating system software module, a communication software module, a graphics software module, a user interface software module, a moving picture experts group (MPEG) module, a camera software module, and one or more application software modules. A module, which is a software component, can also be expressed as a set of instructions, so a module is sometimes referred to as an instruction set. Modules can also be expressed as programs. In various embodiments of the present invention, the memory 230 may include additional modules (instructions) in addition to the modules described above, or may not use some modules (instructions) as needed.
Operating system software modules may include various software components that control general system operations. Control of this general system operation may mean, for example, memory management and control, storage hardware (device) control and management, or power control and management. In addition, an operating system software module can also facilitate the communication between various hardware (devices) and software components (modules).
The communication software module may enable communication with other electronic devices, such as a wearable device, a smart phone, a computer, a server, or a handheld terminal, via the communication module 220 or interface 270. The communication software module may be configured with a protocol structure corresponding to the communication method.
The graphics software module may include various software components for providing and displaying graphics on the display 260. In various embodiments, the term graphics may be used to include text, web pages, icons, digital images, video, animation, and the like.
The user interface software module may include various software components related to the user interface (UI), for example, regarding how the state of the user interface is changed or under which conditions the user interface state changes.
An MPEG module may include software components that enable processes and functions related to digital content (e.g., video, audio), such as creation, playback, distribution, and transmission of content.
The camera software module may include camera-related software components that enable camera-related processes and functions.
The application module may include a web browser including a rendering engine, email, instant messaging, word processing, keyboard emulation, an address book, a touch list, a widget, digital rights management (DRM), iris scan, context cognition, voice recognition, a position determining function, a location-based service, and the like. According to various embodiments of the present invention, the application module may include a correction module for automatic correction of the selection tool. For example, a search area processing module, a feature point processing module, a magnetic processing module, and the like may be included in the correction module.
The
The input device 250 may include, for example, a
Display 260 (e.g., display 160) may include panel 262,
The panel 262 may be embodied, for example, flexibly, transparently, or wearably. The panel 262 may comprise a
The
The interface 270 may include, for example, an
The interface 270 may receive data from another electronic device, or may receive power and communicate it to the respective configurations within the
The audio module 280 can, for example, convert sound and electrical signals in both directions. At least some of the components of the audio module 280 may be included, for example, in the input /
The
The
In accordance with various embodiments, the
The
The power management module 295 can, for example, manage the power of the
The
Each of the components described in this document may be composed of one or more components, and the name of the component may be changed according to the type of the electronic device. In various embodiments, an electronic device (e.g.,
FIG. 3 is a block diagram of a program module in accordance with various embodiments.
According to one embodiment, program module 310 (e.g., program 140) includes an operating system that controls resources associated with an electronic device (e.g.,
3, the
The kernel 320 may include, for example, a system resource manager 321 and/or a device driver 323. The system resource manager 321 can perform control, allocation, or recovery of system resources. According to one embodiment, the system resource manager 321 may include a process manager, a memory manager, or a file system manager. The device driver 323 may include, for example, a display driver, a camera driver, a Bluetooth driver, a shared memory driver, a USB driver, a keypad driver, a WiFi driver, an audio driver, or an inter-process communication (IPC) driver. The middleware 330 may provide various functions through the
According to one embodiment, middleware 330 may include at least one of a runtime library 335, an application manager 341, a window manager 342, a multimedia manager 343, a resource manager 344, a power manager 345, a database manager 346, a package manager 347, a connectivity manager 348, a notification manager 349, a location manager 350, a graphic manager 351, or a security manager 352.
The runtime library 335 may include, for example, a library module that the compiler uses to add new functionality via a programming language while the application 370 is executing. The runtime library 335 may perform input / output management, memory management, or arithmetic function processing.
The application manager 341 can manage the life cycle of the application 370, for example. The window manager 342 can manage graphical user interface (GUI) resources used in the screen. The multimedia manager 343 can recognize the format required for reproducing the media files and can perform encoding or decoding of the media file using a codec according to the format. The resource manager 344 can manage the source code of the application 370 or the space of the memory. The power manager 345 may, for example, manage the capacity or power of the battery and provide the power information necessary for operation of the electronic device. According to one embodiment, the power manager 345 may interface with a basic input / output system (BIOS). The database manager 346 may create, retrieve, or modify the database to be used in the application 370, for example. The package manager 347 can manage installation or update of an application distributed in the form of a package file.
The connectivity manager 348 may, for example, manage the wireless connection. The notification manager 349 may provide the user with an event such as, for example, an arrival message, an appointment, a proximity notification, and the like. The location manager 350 can manage the location information of the electronic device, for example. The graphic manager 351 can manage, for example, a graphic effect to be provided to the user or a user interface related thereto. The security manager 352 may provide, for example, system security or user authentication.
According to one embodiment, the middleware 330 may include a telephony manager for managing the voice or video call function of the electronic device, or a middleware module capable of forming a combination of the functions of the above-described components. According to one embodiment, the middleware 330 may provide a module specialized for each type of operating system. Middleware 330 may dynamically delete some existing components or add new ones.
The
The application 370 may include a
According to one embodiment, the application 370 may include an information exchange application that can support the exchange of information between the
According to one embodiment, the application 370 may include an application received from an external electronic device. At least some of the
As used herein, the term "module" includes a unit of hardware, software, or firmware and may be used interchangeably with terms such as logic, logic block, component, or circuit. A "module" may be an integrally constructed component or a minimum unit, or part thereof, that performs one or more functions. A "module" may be implemented mechanically or electronically and may include, for example, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and programmable logic devices. At least some of the devices (e.g., modules or functions thereof) or methods (e.g., operations) according to various embodiments may be implemented as instructions stored in a computer-readable storage medium in the form of a program module.
The computer-readable recording medium may be a hard disk, a floppy disk, magnetic media (e.g., magnetic tape), optical recording media (e.g., compact disc read-only memory (CD-ROM), digital versatile disc (DVD)), magneto-optical media (e.g., a floptical disk), internal memory, and the like. The instructions may include code generated by a compiler or code executable by an interpreter. A module or program module according to various embodiments may include at least one of the above-described elements, may omit some of them, or may further include other elements. Operations performed by a module, program module, or other component according to various embodiments may be executed sequentially, in parallel, repetitively, or heuristically; at least some operations may be executed in a different order or omitted, or other operations may be added.
According to various embodiments, the recording medium may include a computer-readable recording medium having recorded thereon a program for causing the processor to execute operations of displaying an image through a display, drawing a selection tool on the image corresponding to a user input, determining a correction range based on a surrounding object of the selection tool when the user input is completed, and correcting the selection tool based on the correction range.
In various embodiments, the electronic device may include any device that uses one or more of a variety of processors, such as an AP, a CP, a GPU, and a CPU. For example, an electronic device according to various embodiments may include an information communication device, a multimedia device, a wearable device, an IoT device, or various other devices corresponding to those devices.
FIG. 4 is a diagram illustrating an example of a correction module associated with object selection in an electronic device according to various embodiments of the present invention.
FIG. 4 illustrates an example of a correction module for automatically correcting a selection tool (e.g., a bounding box) drawn for object selection.
Referring to FIG. 4, the correction module 400 for automatic correction of the selection tool may include a search area processing module 410, a feature point processing module 420, and a magnetic processing module 450.
When the selection tool is drawn by the user, the search area processing module 410 may process an operation for reducing the search range in the image displayed through the display (e.g., display 160 or 260 of FIG. 1 or FIG. 2). For example, before determining the actual object belonging to the selection tool in the image, the search area processing module 410 may remove the unnecessary search area and thereby reduce false positives in the random forest inference described later. An example of this is shown in FIG. 5.
FIG. 5 is a diagram illustrating an operation for reducing a search area in an electronic device according to various embodiments of the present invention.
According to various embodiments, when an initial selection tool 520 (e.g., a bounding box) is drawn by a user in an image that includes a particular object 510, a search area may be newly set in proportion to the size of the initial selection tool 520.
According to various embodiments, the
As shown in the example of FIG. 5(b), the search area processing module 410 may remove compression artifacts in the image, for example, by applying an edge-preserving smoothing filter.
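The document only names the filter family ("edge-preserving smoothing"); the bilateral filter is one common member of that family. The sketch below is illustrative (the function name and parameter values are assumptions, not from the source): each output pixel is a mean of its neighbors, weighted by both spatial distance and intensity difference, so flat regions are smoothed while strong edges survive.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Edge-preserving smoothing: weight neighbors by spatial distance
    (sigma_s) AND intensity difference (sigma_r), so pixels across a
    strong edge contribute almost nothing."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-((win - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out
```

On a step image (0 on the left, 100 on the right), the filtered result stays near 0 and near 100 on either side of the step, which is the "edge-preserving" property this stage relies on.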
As shown in the example of FIG. 5(c), the search area processing module 410 may generate super pixels and then apply over-segmentation (e.g., irregular super-pixel segmentation). According to one embodiment, the search area processing module 410 may over-segment the input region and generate irregular super pixels using graph-based segmentation in the search area.
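The patent names graph-based segmentation but gives no algorithm. Below is a minimal, hedged sketch in the style of Felzenszwalb-Huttenlocher graph-based segmentation (all names and parameters are illustrative): pixel-to-pixel edges are processed from lightest to heaviest, and two components are merged when the edge is no heavier than each component's internal variation plus a size-dependent slack, which yields irregular "super pixels".

```python
import numpy as np

def graph_based_segmentation(img, k=50.0, min_size=4):
    """Felzenszwalb-Huttenlocher-style over-segmentation of a 2-D
    grayscale array. Returns an int label map of irregular super
    pixels; `k` controls segment coarseness."""
    h, w = img.shape
    n = h * w
    parent = np.arange(n)
    size = np.ones(n, dtype=int)
    internal = np.zeros(n)  # max edge weight inside each component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # 4-connected edges weighted by intensity difference
    edges = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((abs(float(img[y, x]) - float(img[y, x + 1])), i, i + 1))
            if y + 1 < h:
                edges.append((abs(float(img[y, x]) - float(img[y + 1, x])), i, i + w))
    edges.sort()  # lightest edges first

    for wgt, a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        # merge if the edge is no heavier than either component's
        # internal difference plus its size-dependent threshold
        if wgt <= min(internal[ra] + k / size[ra], internal[rb] + k / size[rb]):
            parent[rb] = ra
            size[ra] += size[rb]
            internal[ra] = max(internal[ra], internal[rb], wgt)

    # absorb tiny components, then relabel densely
    for wgt, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb and (size[ra] < min_size or size[rb] < min_size):
            parent[rb] = ra
            size[ra] += size[rb]

    roots = np.array([find(i) for i in range(n)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels.reshape(h, w)
```

On a two-tone test image this produces exactly two segments split at the intensity step; on natural images it produces many irregular super pixels, which is the over-segmentation the text describes.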
As shown in the example of FIG. 5(d), the search area processing module 410 may regard all super pixels connected to the faces (e.g., area 530) of the selection tool as the background and exclude them from the search area.
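The exact background rule is not spelled out. One plausible reading — any super pixel whose label appears on the selection-tool border is background, and the remaining super pixels inside the box become candidate super pixels of the target object — can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def remove_border_superpixels(labels, box):
    """labels: int label map from over-segmentation.
    box: (x0, y0, x1, y1) selection tool in pixel coordinates.
    Super pixels crossing the box border are assumed background and
    dropped; the rest become candidates of the target object."""
    x0, y0, x1, y1 = box
    border = np.concatenate([
        labels[y0, x0:x1 + 1],   # top edge of the box
        labels[y1, x0:x1 + 1],   # bottom edge
        labels[y0:y1 + 1, x0],   # left edge
        labels[y0:y1 + 1, x1],   # right edge
    ])
    background = set(np.unique(border).tolist())
    inside = labels[y0:y1 + 1, x0:x1 + 1]
    return [l for l in np.unique(inside).tolist() if l not in background]
```

A super pixel fully enclosed by the box survives as a candidate; one that touches the box border is treated as background, mirroring the "connected to the face of the selection tool" removal described above.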
Referring again to FIG. 4, the feature point processing module 420 may process an operation of correcting the size of the selection tool so that the selection tool is closest to the object.
A random forest (or random forest model), as described in various embodiments, is one analysis technique (or model) and may be viewed, for example, as an extension of the decision tree. For example, a decision tree generates and learns one set of training data from a data set to create a single tree used to predict the target variable. A random forest, on the other hand, generates multiple sets of training data through random sampling with replacement from the same data set, builds multiple trees through repeated learning, and combines them to finally predict the target variable. According to one embodiment, a random forest extracts N training sets by random sampling with replacement from the data prepared for analysis, learns a tree from each of them, and derives the final prediction result (or predicted result) by combining the trees through a vote, an average, or a probability.
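The bootstrap-and-vote idea above can be illustrated with deliberately tiny depth-1 "trees". This is a conceptual sketch only — a real random forest grows full decision trees over random feature subsets — and every name below is illustrative:

```python
import random
from collections import Counter

def fit_stump(sample):
    """A depth-1 'tree': split at the mean feature value, predict the
    majority label on each side of the split."""
    thr = sum(x for x, _ in sample) / len(sample)
    left = [y for x, y in sample if x <= thr]           # never empty: min <= mean
    right = [y for x, y in sample if x > thr] or left   # fall back if degenerate
    return (thr,
            Counter(left).most_common(1)[0][0],
            Counter(right).most_common(1)[0][0])

def predict_stump(stump, x):
    thr, left_label, right_label = stump
    return left_label if x <= thr else right_label

def random_forest(dataset, n_trees=25, seed=0):
    """Bagging: each tree is fit on a bootstrap sample (random sampling
    with replacement) drawn from the same data set."""
    rng = random.Random(seed)
    return [fit_stump([rng.choice(dataset) for _ in dataset])
            for _ in range(n_trees)]

def forest_predict(forest, x):
    """Final prediction: majority vote over all trees."""
    votes = Counter(predict_stump(t, x) for t in forest)
    return votes.most_common(1)[0][0]
```

Even though any single bootstrap tree can be poor (a sample may miss one class entirely), the vote across many trees is stable — the property the text relies on when combining "1st to 50th" trees.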
The first feature point, according to various embodiments, may correspond to a line component of the image.
According to one embodiment, the first feature point (line component) may be detected based on a line learning model learned through an edge detection scheme (e.g., a structural edges technique).
The second feature point, according to various embodiments, may correspond to a patch component of the image.
According to one embodiment, the second feature point (patch component) may be detected based on a patch learning model learned around the portion where the selection tool and the object are in contact. An example of learning patch components is shown in FIG. 6.
FIG. 6 is a diagram illustrating an operation for learning patch components in an electronic device according to various embodiments of the present invention.
Referring to FIG. 6, in various embodiments, the color information of the 16x16 patch at the portion where the selection tool and the object are in contact, and the gradients in four directions at the edge, may be computed to create multiple components (e.g., a multi-channel feature), and a patch learning model may be created by learning from a random forest based on the created components. According to one embodiment, the example of FIG. 6 may be a case where nine representative patterns are extracted by K-means clustering of training 16x16 patches. In various embodiments, the electronic device may store the patch learning model learned through the learning process described above, and the model may be managed and used based on the second feature point.
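The exact channels of the 16x16 patch feature are not fully specified ("color information" plus "gradients of four directions"). The hedged sketch below uses mean intensity plus mean absolute gradients along four directions as stand-in channels, then extracts representative patterns with plain k-means, echoing the nine K-means clusters of FIG. 6. All names and parameters are illustrative.

```python
import numpy as np

def patch_features(img, size=16):
    """Cut the image into non-overlapping size x size patches and build
    a multi-channel feature per patch: mean intensity plus mean absolute
    gradient in four directions (horizontal, vertical, two diagonals)."""
    h, w = img.shape
    feats = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            p = img[y:y + size, x:x + size].astype(float)
            feats.append([
                p.mean(),
                np.abs(np.diff(p, axis=1)).mean(),       # horizontal gradient
                np.abs(np.diff(p, axis=0)).mean(),       # vertical gradient
                np.abs(p[1:, 1:] - p[:-1, :-1]).mean(),  # diagonal \
                np.abs(p[1:, :-1] - p[:-1, 1:]).mean(),  # diagonal /
            ])
    return np.array(feats)

def kmeans(feats, k=9, iters=20, seed=0):
    """Plain k-means: returns k representative patterns (centroids)."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = feats[assign == j].mean(axis=0)
    return centers
```

With k=9 the centroids play the role of the nine representative patch patterns; in the patent these patterns feed the random-forest patch learning model rather than being used directly.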
According to various embodiments, the
FIGS. 7A, 7B, 7C and 7D are diagrams illustrating an example in which selection tool correction is applied in an electronic device according to various embodiments of the present invention.
As shown in FIGS. 7A, 7B, 7C and 7D, examples are illustrated of bounding box refinement for a user's lazy dragging.
Referring to FIGS. 7A, 7B, 7C and 7D, a bounding box 710 (hereinafter referred to as a first bounding box 710) represents an optimized bounding box in the case where the user draws precisely on the object to be selected, a bounding box 720 (hereinafter referred to as a second bounding box 720) represents a bounding box that the user has roughly dragged (e.g., lazy dragging), and a bounding box 730 may, according to various embodiments, represent the bounding box into which the second bounding box 720 has been automatically corrected.
As shown in FIGS. 7A, 7B, and 7C, regardless of the user's lazy dragging, the selection tool may be automatically corrected to be closest to the object.
Referring to FIG. 7D, if the intention of the user is not recognized because a plurality of objects (or super pixels) (e.g., people, bicycles) exist around the selection tool, further correction by the user may be required.
Referring again to FIG. 4, the magnetic processing module 450 may process the movement of the selection tool by a magnetic function in units of super pixels at the time of further correction of the corrected selection tool.
According to one embodiment, as illustrated in FIG. 7D described above, the user may select a specific object among the plurality of objects by further correcting the corrected selection tool.
Referring to FIG. 7D, the user may select the intended object by moving the corrected selection tool, and the movement may be adjusted in units of super pixels by the magnetic function.
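How the magnetic function snaps the selection tool is not detailed. A plausible sketch is to move a side of the box to the nearest super-pixel boundary, i.e. the nearest column (or row) where the label map changes. Only the right side is shown for brevity, and all names are illustrative:

```python
import numpy as np

def snap_side_to_boundary(labels, box, side="right", max_shift=10):
    """'Magnetic' adjustment: move one side of the selection tool to the
    nearest super-pixel boundary (a column where labels change)."""
    x0, y0, x1, y1 = box
    h, w = labels.shape

    def is_boundary_col(x):
        # a column is a boundary if any label differs from its right neighbor
        return x + 1 < w and (labels[y0:y1 + 1, x] != labels[y0:y1 + 1, x + 1]).any()

    if side == "right":
        for d in range(max_shift + 1):        # search outward and inward
            for x in (x1 + d, x1 - d):
                if 0 <= x < w and is_boundary_col(x):
                    return (x0, y0, x, y1)
    return box  # no boundary within range: leave the tool untouched
```

Because the side always lands on a label-change position, the adjustment naturally proceeds "in units of super pixels", matching the magnetic behavior described above.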
As described above, an electronic device according to various embodiments may include a display, a memory, and a processor operably coupled to the display and the memory, wherein the processor may be configured to display an image through the display, draw a selection tool on the image corresponding to a user input, determine a correction range based on a surrounding object of the selection tool when the user input is completed, and correct the selection tool to be closest to the object based on the correction range.
According to various embodiments, the processor may be configured to distinguish the image in superpixel units, corresponding to sensing the user input for the selection tool.
According to various embodiments, the processor may be configured to learn edge information around the drawn selection tool based on a random forest to determine the correction range.
According to various embodiments, the processor may be configured to remove unnecessary search areas through search area reduction in the image, and generate super pixels of the remaining search area that are not removed as candidate super pixels of the target object.
According to various embodiments, the processor may be configured to detect a line component and a patch component based on the candidate super-pixel, and to divide the boundary of the object by referring to the learned model for each detected component.
According to various embodiments, the processor may be configured to determine the correction range based on learned learning data based on a random forest.
According to various embodiments, the processor may utilize semi-supervised learning for acquisition of the training data.
According to various embodiments, the processor may be configured to adjust movement of the selection tool on a per-super-pixel basis, corresponding to sensing a user input for further correction of the corrected selection tool.
According to various embodiments, the processor includes a search area processing module (410) for processing an operation for search range reduction in the image when the selection tool is drawn corresponding to the user input, and a feature point processing module (420) for processing an operation of correcting the size of the selection tool so that the selection tool is closest to the object.
According to various embodiments, the processor may be configured to include a magnetic processing module (450) configured to process the movement of the selection tool by the magnetic function in units of super pixels at the time of further correction of the corrected selection tool.
According to various embodiments, the processor may be configured to determine a correction range of the selection tool based on an area of the user's finger being touched.
Hereinafter, an operation method according to various embodiments of the present invention will be described with reference to the accompanying drawings. It should be noted, however, that the various embodiments of the present invention are not limited or restricted by the following description, and the invention can be applied to various embodiments based on the embodiments below. In the various embodiments of the present invention described below, a hardware approach will be described as an example. However, since various embodiments of the present invention include techniques using both hardware and software, they do not exclude a software-based approach.
FIG. 8 is a diagram illustrating an operation for correcting a selection tool in an electronic device according to various embodiments of the present invention.
As shown in FIG. 8, according to various embodiments, there are three stages, such as a pre-stage 810, a
In various embodiments, the
In various embodiments, the
In various embodiments, the
According to various embodiments, the electronic device may use super pixels (e.g., a graph-based segmentation technique) to reduce false positives through search area reduction.
As described above, according to the various embodiments, even if the user performs lazy dragging, more accurate object selection can be provided by using the object characteristics around the selection tool. In various embodiments, the electronic device may use computer vision and machine learning techniques for automatic correction of the selection tool. According to one embodiment, the electronic device learns/infers the characteristics of the edges of the object in a random forest manner and, based on the learned/trained model, can correct the selection tool (e.g., a bounding box) (for example, four sides, such as two sides in the transverse direction and two sides in the longitudinal direction). An example of the operation of correcting the selection tool based on the random forest is shown in FIG. 9.
FIG. 9 is a diagram illustrating an example of calibrating and providing a selection tool in an electronic device according to various embodiments of the present invention.
FIG. 9 shows an example of correcting and providing a selection tool that has been dragged by a user.
Referring to FIG. 9, in various embodiments, a random forest algorithm is used to generate multiple sets of training data through random sampling with replacement from one and the same data set, as described above, and multiple training data trees (e.g., 1st to 50th trees) can be created. In various embodiments, the final prediction result may be derived using a vote, an average, or a probability based on the generated training data trees.
According to one embodiment, as in the example of FIG. 9(a), it may be the case that the user selects an object (e.g., a chair) in the image and draws the selection tool lazily (or roughly).
According to one embodiment, as illustrated in FIG. 9(b), the electronic device may draw a corrected selection tool that is closest to the object.
In various embodiments, over-segmentation (e.g., irregular super-pixel segmentation), edge detection (e.g., structural edge detection), and edge analysis (e.g., random forest regression) can be utilized for the sophistication of the automatic correction of the selection tool for object selection.
FIG. 10 is a flow diagram illustrating a method of operation of an electronic device in accordance with various embodiments of the present invention.
10, at
At
At
At
In
In
At operation 1011, the processor 210 may adjust the selection tool based on the determined correction range. According to one embodiment, the processor 210 may move (adjust) at least one side of the selection tool based on the inferred result based on the random forest algorithm to correct the selection tool.
At
At
FIG. 11 is a flow chart illustrating a method for determining a correction range of a selection tool in an electronic device according to various embodiments of the present invention.
Referring to FIG. 11, at
According to various embodiments, for a particular face (e.g., the right border, the left border, the upper border, or the lower border) of the selection tool drawn around the object, the processor 210 may determine the search range to be half the width of the selection tool, centered on that border. According to one embodiment, the processor 210 may determine the search area while rotating the image according to the particular face of reference.
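Under the stated rule — a search range half the width of the selection tool, centered on the border being refined — one hedged interpretation allots a quarter of the box width (or height) on each side of that border, clipped to the image. The function and coordinate convention below are assumptions, not from the source:

```python
def search_band(box, side, img_w, img_h):
    """Search range for one face of the selection tool: a band of total
    width (x1 - x0) / 2 (or height / 2), centered on that border and
    clipped to the image. box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    qw, qh = (x1 - x0) // 4, (y1 - y0) // 4   # a quarter on each side
    if side == "right":
        return (max(0, x1 - qw), y0, min(img_w - 1, x1 + qw), y1)
    if side == "left":
        return (max(0, x0 - qw), y0, min(img_w - 1, x0 + qw), y1)
    if side == "top":
        return (x0, max(0, y0 - qh), x1, min(img_h - 1, y0 + qh))
    if side == "bottom":
        return (x0, max(0, y1 - qh), x1, min(img_h - 1, y1 + qh))
    raise ValueError(side)
```

Restricting the search to a narrow band per face is what makes per-border refinement cheap; the text's note about rotating the image suggests the same band logic can be reused for all four faces.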
According to various embodiments, the processor 210 may remove an artifact by applying an edge-preserving smoothing filter or the like based on the determined search area. According to various embodiments, the processor 210 may remove the artifacts from the image to create superpixels, and then apply over-segmentation (e.g., irregular superpixel segmentation) on a superpixel basis. According to one embodiment, the processor 210 may determine the background of the object in the selection tool in the image and exclude the determined super-pixel of the background from the search area.
According to various embodiments, the processor 210 may generate the remaining super pixels, other than the super pixels excluded from the image, as candidates of the target object. According to various embodiments, the processor 210 may determine whether the edges of the remaining super pixels (e.g., edge pixels) form the boundary of the object, and may thereby determine the correction range of the selection tool.
At
FIG. 12 is a flow chart illustrating a method of calibrating a selection tool in an electronic device in accordance with various embodiments of the present invention.
Referring to Figure 12, at an
According to one embodiment, the processor 210 may use the feature point processing module 420 to correct the selection tool.
According to one embodiment, the processor 210 can determine whether a selected super pixel is the boundary of an object using a random forest model. According to one embodiment, the processor 210 may detect a line component (first feature point) and a patch component (second feature point) in the image (e.g., the selected super pixel) based on a line learning model and a patch learning model.
According to one embodiment, the processor 210 may detect the overall line component in the selected superpixel, based on the learned line-learning model through various edge detection schemes (e.g., structural edges techniques).
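Structured-edge detection, as referenced here, is a learned detector; as a far simpler stand-in, a thresholded central-difference gradient magnitude already illustrates what detecting the "line component" means. The function below is illustrative, not the patent's method:

```python
import numpy as np

def gradient_edges(img, thresh=40.0):
    """Detect line components as pixels with high gradient magnitude.
    img: 2-D grayscale array. Returns a boolean edge map."""
    img = img.astype(float)
    p = np.pad(img, 1, mode="edge")            # pad to keep the input shape
    gx = (p[1:-1, 2:] - p[1:-1, :-2]) / 2.0    # horizontal central difference
    gy = (p[2:, 1:-1] - p[:-2, 1:-1]) / 2.0    # vertical central difference
    mag = np.hypot(gx, gy)                     # gradient magnitude
    return mag > thresh
```

On a vertical intensity step, only the two columns straddling the step light up, which is exactly the line component the boundary test looks for; a learned structured-edge model replaces the fixed threshold with per-patch predictions.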
According to one embodiment, the processor 210 may detect the entire patch component in the selected super pixel, based on the patch learning model learned around the portion where the selection tool and the object (e.g., the selected super pixel) are in contact.
At
According to various embodiments, the processor 210 may correct the selection tool in various ways other than the manner described above. According to one embodiment, a user may use a finger to draw the selection tool on the display, and the processor 210 may determine the correction range of the selection tool based on the area (or width) of the touched finger. For example, the processor 210 may track the touched area (or region) of the user's finger and apply a portion of the tracked area to the correction range estimate. The processor 210 may adjust at least one side of the selection tool inward or outward so as to be closest to the object in accordance with the determined correction range.
FIG. 13 is a flow chart illustrating a method for selecting an object by correction of a selection tool in an electronic device according to various embodiments of the present invention.
Referring to Figure 13, at
At
At
At
In
At
In
At
In
In
At
At
At operation 1323, the processor 210 may automatically adjust the selection tool based on the correction range. According to one embodiment, the processor 210 may determine at least one side of the selection tool to adjust, based on the determined correction range, and may move the at least one side based on the determination result.
At
At
In
At
At
As described above, the method of operating the electronic device according to various embodiments may include displaying an image through a display, drawing a selection tool on the image corresponding to a user input, determining a correction range based on a surrounding object of the selection tool when the user input is completed, and correcting the selection tool to be closest to the object based on the correction range.
According to various embodiments, the step of determining the correction range may include, in response to sensing the user input for the selection tool, dividing the image into superpixels.
According to various embodiments, the step of determining the correction range may include determining the correction range by learning edge information around the drawn selection tool based on a random forest.
According to various embodiments, the step of determining the correction range may include removing an unnecessary search area through search area reduction in the image, and generating super pixels of the remaining search area as candidate super pixels of the target object.
According to various embodiments, the step of determining the correction range may include detecting a line component and a patch component based on the candidate super pixels, and dividing the boundary of the object by referring to the learned model for each detected component.
According to various embodiments, the step of determining the correction range may include determining the correction range based on learning data learned based on a random forest, and semi-supervised learning may be used to obtain the learning data.
According to various embodiments, the method may further include detecting a user input for further correction of the corrected selection tool, and adjusting the movement of the selection tool in units of super pixels in response to the user input.
According to various embodiments, the correcting step may include processing an operation for search range reduction in the image when the selection tool is drawn corresponding to the user input, and correcting the size of the selection tool so that the selection tool is closest to the object.
According to various embodiments, the step of determining the correction range may include determining a correction range of the selection tool based on an area of the user's finger that is touched.
The various embodiments of the present invention disclosed in the present specification and drawings are merely illustrative examples presented to facilitate understanding of the present invention, and are not intended to limit its scope. Accordingly, all changes or modifications derived from the technical idea of the present invention should be construed as being included in the scope of the present invention.
101, 201: Electronic device
120, 210: Processor
130, 230: memory
160, 260: Display
400: correction module
410: Search area processing module
420: Feature point processing module
450: Magnetic processing module
Claims (20)
display;
Memory; And
And a processor operably coupled to the display and the memory,
Displaying an image through the display,
Drawing a selection tool on the image corresponding to a user input,
Determining a correction range based on a surrounding object of the selection tool when the user input is completed,
And to correct the selection tool to be closest to the object based on the correction range.
And to distinguish the images in superpixel units, corresponding to sensing the user input for the selection tool.
And the edge information around the drawn selection tool is learned based on a random forest to determine the correction range.
Removing an unnecessary search area through the search area reduction in the image,
And to generate a super-pixel of the remaining non-removed search area as a candidate super-pixel of the target object.
Detecting a line component and a patch component based on the candidate super pixel,
And the boundaries of the objects are identified by referring to the learned models for the detected components.
And to determine the correction range based on the learned learning data based on a random forest.
And using semi-supervised learning to obtain the learning data.
And to adjust the movement of the selection tool on a per-super-pixel basis, corresponding to sensing a user input for further correction of the corrected selection tool.
A search area processing module that processes an operation for search range reduction in the image when the selection tool is drawn corresponding to the user input; And
And a feature point processing module for processing an operation of correcting the size of the selection tool such that the selection tool is closest to the object.
And a magnetic processing module configured to process the movement of the selection tool by the magnetic function in units of superpixels at the time of further correction of the corrected selection tool.
And to determine a correction range of the selection tool based on the touched area of the user's finger.
A process of displaying an image through a display,
Drawing a selection tool on the image corresponding to user input,
Determining a correction range based on a surrounding object of the selection tool when the user input is completed,
And correcting the selection tool to be closest to the object based on the correction range.
And separating the image into superpixel units corresponding to sensing the user input for the selection tool.
And determining the correction range by learning edge information around the drawn selection tool based on a random forest.
Removing an unnecessary search area through the search area reduction in the image,
And generating super pixels of the remaining non-removed search regions as candidate super pixels of the target object.
Detecting a line component and a patch component based on the candidate super pixel;
And dividing the boundary of the object by referring to the learned model for each detected component.
And determining the correction range based on the learned learning data based on a random forest,
Wherein semi-supervised learning is used for obtaining the learning data.
Detecting a user input for further correction of the corrected selection tool,
And adjusting the movement of the selection tool in units of the superpixels in response to the user input.
Processing an operation for a search range reduction in the image when the selection tool is drawn corresponding to the user input,
And correcting the size of the selection tool such that the selection tool is closest to the object.
And determining a correction range of the selection tool based on the area of the user's finger being touched.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170002488A KR20180081353A (en) | 2017-01-06 | 2017-01-06 | Electronic device and operating method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170002488A KR20180081353A (en) | 2017-01-06 | 2017-01-06 | Electronic device and operating method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20180081353A true KR20180081353A (en) | 2018-07-16 |
Family
ID=63105695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020170002488A KR20180081353A (en) | 2017-01-06 | 2017-01-06 | Electronic device and operating method thereof |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20180081353A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109298819A (en) * | 2018-09-21 | 2019-02-01 | Oppo广东移动通信有限公司 | Method, apparatus, terminal and the storage medium of selecting object |
KR20200092456A (en) * | 2019-01-07 | 2020-08-04 | 한림대학교 산학협력단 | Apparatus and method of correcting touch sensor input |
WO2020183656A1 (en) * | 2019-03-13 | 2020-09-17 | 日本電気株式会社 | Data generation method, data generation device, and program |
KR102310595B1 (en) * | 2021-02-10 | 2021-10-13 | 주식회사 인피닉 | Annotation method of setting object properties using proposed information, and computer program recorded on record-medium for executing method thereof |
KR102310585B1 (en) * | 2021-02-10 | 2021-10-13 | 주식회사 인피닉 | Annotation method of assigning object simply, and computer program recorded on record-medium for executing method thereof |
KR102343036B1 (en) * | 2021-02-10 | 2021-12-24 | 주식회사 인피닉 | Annotation method capable of providing working guides, and computer program recorded on record-medium for executing method thereof |
KR102352942B1 (en) * | 2021-01-13 | 2022-01-19 | 셀렉트스타 주식회사 | Method and device for annotating object boundary information |
KR102356909B1 (en) * | 2021-05-13 | 2022-02-08 | 주식회사 인피닉 | Annotation method of assigning object and setting object properties for learning data of artificial intelligence, and computer program recorded on record-medium for executing method thereof |
-
2017
- 2017-01-06 KR KR1020170002488A patent/KR20180081353A/en unknown
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109298819A (en) * | 2018-09-21 | 2019-02-01 | Oppo广东移动通信有限公司 | Method, apparatus, terminal and the storage medium of selecting object |
KR20200092456A (en) * | 2019-01-07 | 2020-08-04 | 한림대학교 산학협력단 | Apparatus and method of correcting touch sensor input |
WO2020183656A1 (en) * | 2019-03-13 | 2020-09-17 | 日本電気株式会社 | Data generation method, data generation device, and program |
JPWO2020183656A1 (en) * | 2019-03-13 | 2021-11-18 | 日本電気株式会社 | Data generation method, data generation device and program |
KR102352942B1 (en) * | 2021-01-13 | 2022-01-19 | 셀렉트스타 주식회사 | Method and device for annotating object boundary information |
KR102310595B1 (en) * | 2021-02-10 | 2021-10-13 | 주식회사 인피닉 | Annotation method of setting object properties using proposed information, and computer program recorded on record-medium for executing method thereof |
KR102310585B1 (en) * | 2021-02-10 | 2021-10-13 | 주식회사 인피닉 | Annotation method of assigning object simply, and computer program recorded on record-medium for executing method thereof |
KR102343036B1 (en) * | 2021-02-10 | 2021-12-24 | 주식회사 인피닉 | Annotation method capable of providing working guides, and computer program recorded on record-medium for executing method thereof |
KR102356909B1 (en) * | 2021-05-13 | 2022-02-08 | 주식회사 인피닉 | Annotation method of assigning object and setting object properties for learning data of artificial intelligence, and computer program recorded on record-medium for executing method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3346696B1 (en) | Image capturing method and electronic device | |
US10429905B2 (en) | Electronic apparatus having a hole area within screen and control method thereof | |
US9904409B2 (en) | Touch input processing method that adjusts touch sensitivity based on the state of a touch object and electronic device for supporting the same | |
CN110476189B (en) | Method and apparatus for providing augmented reality functions in an electronic device | |
US10996847B2 (en) | Method for providing content search interface and electronic device for supporting the same | |
CN107077292B (en) | Cut and paste information providing method and device | |
US10917552B2 (en) | Photographing method using external electronic device and electronic device supporting the same | |
KR20180081353A (en) | Electronic device and operating method thereof | |
US10445485B2 (en) | Lock screen output controlling method and electronic device for supporting the same | |
US10642437B2 (en) | Electronic device and method for controlling display in electronic device | |
KR102500715B1 (en) | Electronic apparatus and controlling method thereof | |
CN115097982A (en) | Method for processing content and electronic device thereof | |
EP3336675B1 (en) | Electronic devices and input method of electronic device | |
EP3125101A1 (en) | Screen controlling method and electronic device for supporting the same | |
KR20180010029A (en) | Method and apparatus for operation of an electronic device | |
US10726193B2 (en) | Electronic device and operating method thereof | |
KR20180025763A (en) | Method and apparatus for providing miracast | |
KR20180014614A (en) | Electronic device and method for processing touch event thereof | |
US10091436B2 (en) | Electronic device for processing image and method for controlling the same | |
KR20180037753A (en) | Electronic apparatus and operating method thereof | |
KR102408942B1 (en) | Method for processing input of electronic device and electronic device | |
KR20180020473A (en) | Electronic apparatus and method for controlling thereof |