KR20160006909A - Method for processing image and storage medium storing the method - Google Patents

Method for processing image and storage medium storing the method

Info

Publication number
KR20160006909A
Authority
KR
South Korea
Prior art keywords
server
image
application program
information
extracted
Prior art date
Application number
KR1020140086557A
Other languages
Korean (ko)
Inventor
김진곤
Original Assignee
김진곤
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 김진곤 filed Critical 김진곤
Priority to KR1020140086557A priority Critical patent/KR20160006909A/en
Publication of KR20160006909A publication Critical patent/KR20160006909A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present invention relates to an image processing method using an application program installed on a portable terminal that communicates with a server. The image processing method comprises the steps of: setting an extraction keyword using the application program and transmitting the set extraction keyword to the server; the application program receiving an image generated by a camera of a smart device; the application program transmitting the received image to the server; and the application program receiving extraction information from the server, wherein the extraction information is information about an object extracted by the server, according to the extraction keyword, from among a plurality of objects included in the image.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention [0001] The present invention relates to an image processing method and, more particularly, to a method by which an application program installed in a portable terminal processes an image generated by a smart device such as a wearable device.

An embodiment according to the concept of the present invention relates to a method of processing an image, and in particular to a method for an application program installed in a portable terminal to process an image generated by a smart device such as a wearable device.

With the recent advancement of information technology (IT), wearable devices such as smart glasses and smart watches are emerging. To minimize the inconvenience of wearing them and to maximize usage time, wearable devices include only the components necessary for minimal operation. Therefore, most of their operations are controlled by connecting to a portable terminal, such as a smart phone, and by a dedicated application program installed in the portable terminal.

Most wearable devices include a camera, which can be used more easily and quickly than the camera of a smart phone or a dedicated camera.

Recently, various recognition technologies are being developed and utilized. One technique is to recognize objects included in an image.

Accordingly, there is a demand for techniques that utilize smart devices such as wearable devices together with image recognition technology.

1. Laid-open Patent Publication No. 10-2011-0044294 (published April 28, 2011)
2. Laid-open Patent Publication No. 10-2008-0020971 (published March 6, 2008)

An object of the present invention is to provide a method by which an application program installed in a portable terminal, using a smart device connected to the portable terminal and a server connected to the portable terminal, extracts a desired object from among the objects included in an image. In particular, the present invention provides a method for extracting the desired object by setting a keyword and/or an area.

Another object of the present invention is to provide a storage medium storing a computer program capable of executing the method.

An image processing method using an application program installed in a portable terminal communicating with a server according to an embodiment of the present invention includes the steps of: setting an extraction keyword using the application program and transmitting the set extraction keyword to the server; the application program receiving an image generated by a camera of a smart device; the application program transmitting the received image to the server; and the application program receiving extraction information from the server, wherein the extraction information is information about an object extracted by the server, according to the extraction keyword, from among a plurality of objects included in the image.

According to an embodiment, the extraction information includes a name of the object extracted by the server according to the extraction keyword from among the names of the plurality of objects included in the image.

According to another embodiment, the extraction information includes a material of the object extracted by the server according to the extraction keyword from among the materials of the plurality of objects included in the image.

According to yet another embodiment, the extraction information includes a color of the object extracted by the server according to the extraction keyword from among the colors of the plurality of objects included in the image.

According to yet another embodiment, the extraction information includes a pattern of the object extracted by the server according to the extraction keyword from among the patterns of the plurality of objects included in the image.

According to another embodiment, the extraction information includes a shape of the object extracted by the server according to the extraction keyword from among the shapes of the plurality of objects included in the image.

The image processing method may further include the step of the application program transmitting the extracted information to the smart device.

An image processing method using an application program installed in a portable terminal communicating with a server according to another embodiment of the present invention includes the steps of: setting a partial image area using the application program and transmitting the set partial image area information to the server; the application program receiving an image generated by a camera of a smart device; the application program transmitting the received image to the server; and the application program receiving extraction information from the server, wherein the extraction information is information of an object that is included in the partial image area of the image and is extracted by the server.

An image processing method using an application program installed in a portable terminal communicating with a server according to yet another embodiment of the present invention includes the steps of: setting a partial image area using the application program and transmitting the set partial image area information to the smart device; the application program receiving a partial image corresponding to the partial image area from among images generated by a camera of the smart device; the application program transmitting the received partial image to the server; and the application program receiving extraction information from the server, wherein the extraction information is information of an object that is included in the partial image and is extracted by the server.

An image processing method using an application program installed in a portable terminal communicating with a server according to yet another embodiment of the present invention includes the steps of: setting a partial image area using the application program; the application program receiving an image generated by a camera of a smart device; the application program transmitting a partial image corresponding to the partial image area from among the received image to the server; and the application program receiving extraction information from the server, wherein the extraction information is information of an object that is included in the partial image and is extracted by the server.

A computer-readable storage medium according to an embodiment of the present invention stores a computer program capable of executing the image processing method.

An image processing method according to an embodiment of the present invention enables an application program installed in a portable terminal, using a smart device connected to the portable terminal and a server connected to the portable terminal, to extract only an object corresponding to a set keyword and/or a set area from among the objects included in an image and to obtain information about the extracted object. The image processing method according to an embodiment of the present invention is particularly effective for object recognition for the visually impaired, navigation for walking, recognition and avoidance of obstacles, and interworking with a commerce platform.

BRIEF DESCRIPTION OF THE DRAWINGS In order to more fully understand the drawings recited in the detailed description of the present invention, a brief description of each drawing is provided.
FIG. 1 schematically shows an image processing system for performing an image processing method according to an embodiment of the present invention.
FIG. 2 is a schematic block diagram of the smart device shown in FIG. 1.
FIG. 3 is a schematic block diagram of the portable terminal shown in FIG. 1.
FIG. 4 is a flowchart for explaining an image processing method according to an embodiment of the present invention.
FIG. 5 is a flowchart for explaining an image processing method according to another embodiment of the present invention.
FIG. 6 is a flowchart for explaining an image processing method according to another embodiment of the present invention.
FIG. 7 is a flowchart for explaining an image processing method according to another embodiment of the present invention.
FIG. 8(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 8(b) is a flowchart illustrating a process in which the server extracts an object from the image according to the image processing method.
FIG. 9(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 9(b) is a flowchart illustrating a process in which the server extracts an object from the image according to the image processing method.
FIG. 10(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 10(b) is a flowchart illustrating a process in which the server extracts an object from the image according to the image processing method.
FIG. 11 is a data flow diagram according to the image processing methods of FIGS. 8 to 10.
FIG. 12 schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention.
FIG. 13 is a data flow diagram according to an embodiment of the image processing method of FIG. 12.
FIG. 14 is a data flow diagram according to another embodiment of the image processing method of FIG. 12.
FIG. 15 is a data flow diagram according to yet another embodiment of the image processing method of FIG. 12.

The specific structural or functional descriptions of the embodiments of the present invention disclosed herein are provided for illustrative purposes only and are not intended to limit the scope of the inventive concept; the inventive concept may be embodied in many different forms and is not limited to the embodiments set forth herein.

The embodiments according to the concept of the present invention may be modified in various ways and may take various forms, so particular embodiments are illustrated in the drawings and described in detail herein. It should be understood, however, that this is not intended to limit the embodiments according to the concept of the present invention to the particular forms disclosed, but is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

The terms first, second, and the like may be used to describe various elements, but the elements should not be limited by these terms. The terms are used only for the purpose of distinguishing one element from another; for example, without departing from the scope of rights according to the concept of the present invention, a first element may be referred to as a second element, and similarly, a second element may also be referred to as a first element.

When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements. Other expressions that describe the relationship between elements, such as "between" and "directly between" or "adjacent to" and "directly adjacent to", should be interpreted in the same manner.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, the terms "comprises" or "having" and the like are used to specify the presence of stated features, numbers, steps, operations, elements, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.

Unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art and, unless explicitly defined herein, are not to be interpreted in an idealized or overly formal sense.

The term "object" as used herein refers to both animate and inanimate things. That is, an object may include a person.

In this specification, for convenience of description, an application program is described as transmitting or receiving an image or information; however, the actual subject that transmits or receives the image or information is the portable terminal, and the image or information should be understood to be transmitted or received under the control of the application program.

That is, when an application program executed in the portable terminal is described as transmitting or receiving a signal (or data), this means that a transmitter or receiver included in the portable terminal transmits the signal (or data) to an external device, or receives the signal (or data) from the external device, under the control of the application program.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings attached hereto.

FIG. 1 schematically shows an image processing system for performing an image processing method according to an embodiment of the present invention.

Referring to FIG. 1, an image processing system 10 capable of performing an image processing method according to an embodiment of the present invention includes a smart device 100-1 or 100-2, a portable terminal 200, and a first server 300.

The smart device 100-1 or 100-2 (collectively 100) may generate the image (or image data) necessary to carry out the present invention. The smart device 100 is a wearable device that can be worn by a user. In FIG. 1, a wearable device of a spectacle type 100-1 or a watch type 100-2 is shown; however, the smart device 100 is not necessarily limited to a wearable device and may include other electronic devices as well.

Referring to FIG. 1, each of the smart devices 100-1 and 100-2 (collectively 100) includes a camera 120-1 or 120-2 (collectively 120), a main body 140-1 or 140-2 (collectively 140), and a display 160-1 or 160-2 (collectively 160).

The camera 120-1 or 120-2 can convert the optical image into an electrical signal and generate an image according to the result of the conversion. For example, the camera 120-1 or 120-2 may be implemented as a complementary metal-oxide semiconductor (CMOS) image sensor. According to the embodiment, the camera 120-1 or 120-2 may include a color sensor for detecting color information and a depth sensor for detecting depth information.

The image may be a still image, a moving image, or a stereoscopic image.

According to an embodiment, the image may further include depth information (or distance information) between the smart device 100-1 or 100-2 and the object. The depth information may be calculated (or output) using the depth sensor of the camera 120-1 or 120-2. The depth sensor can measure the depth (or distance) between the camera 120-1 or 120-2 and an object using a time-of-flight (TOF) measurement method.

That is, the depth sensor measures the delay time until a pulse-shaped signal (for example, a microwave, a light wave, or an ultrasonic wave) radiated from a source is reflected by an object and returned, and can calculate the distance (or depth) between the depth sensor and the object based on the result of the measurement.
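As a worked illustration of this measurement principle (not part of the disclosed embodiment), the distance follows directly from the measured round-trip delay and the propagation speed of the pulse; all names below are hypothetical.

```python
# Illustrative time-of-flight sketch: distance = (propagation speed * delay) / 2,
# because the pulse travels to the object and back.
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # for a light-wave pulse; use ~343 m/s for ultrasound

def tof_distance_m(round_trip_delay_s: float,
                   propagation_speed_m_per_s: float = SPEED_OF_LIGHT_M_PER_S) -> float:
    """Distance between the depth sensor and the object, in metres."""
    return propagation_speed_m_per_s * round_trip_delay_s / 2.0

# Example: a light pulse returning after 20 ns corresponds to roughly 3 m.
print(tof_distance_m(20e-9))  # ~2.998 m
```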

According to another embodiment, the camera 120-1 or 120-2 may be an infrared camera. Therefore, even when sufficient light is not provided (for example, at night), the camera 120-1 or 120-2 can generate an image.

The infrared camera may be a night vision camera that detects infrared wavelengths of about 0.7 μm to 3 μm.

The main body 140-1 or 140-2 can control the operation of the components of the smart device 100-1 or 100-2 and can wirelessly connect to the portable terminal 200 to exchange data with it. The operation of each component of the main body 140-1 or 140-2 will be described in detail with reference to FIG. 2.

The display 160-1 or 160-2 can visually display data output from the main body 140-1 or 140-2.

The portable terminal 200 is provided with an application program (an application or an app) capable of performing the image processing method according to an embodiment of the present invention, and can exchange data with the smart device 100-1 or 100-2 and with the first server 300.

For example, the portable terminal 200 may be a smart phone, a tablet PC, a mobile phone, a personal digital assistant (PDA), a mobile internet device (MID), an enterprise digital assistant (EDA), a personal navigation device or portable navigation device (PND), an internet of things (IoT) device, or an internet of everything (IoE) device.

The first server 300 may transmit or receive signals (or data) wirelessly to or from the portable terminal 200 through a wireless network or a mobile communication network. The first server 300 receives the image output from the portable terminal 200, stores it in the database 400, analyzes the stored image, extracts at least one of the objects included in the image, and transmits information about the extracted object to the portable terminal 200.

The first server 300 may extract feature points from the image transmitted from the portable terminal 200 using at least one image recognition algorithm such as a scale invariant feature transform (SIFT), speeded up robust features (SURF), a histogram of oriented gradients (HOG), and/or a modified census transform (MCT), and may analyze the image by calculating the similarity between the extracted feature points and the feature points of objects registered in advance in the database 400.
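The embodiment names SIFT, SURF, HOG, and MCT as candidate recognition algorithms but gives no implementation. The sketch below shows one possible SIFT-based similarity measure using OpenCV; the function name, file paths, and ratio-test threshold are assumptions, not the disclosed method.

```python
# A minimal sketch of keypoint-based similarity matching, assuming OpenCV (cv2).
import cv2

def sift_similarity(query_img_path: str, registered_img_path: str) -> int:
    """Return the number of 'good' SIFT matches between a received image and an
    image registered in the database; a higher count means higher similarity."""
    query = cv2.imread(query_img_path, cv2.IMREAD_GRAYSCALE)
    registered = cv2.imread(registered_img_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, desc_q = sift.detectAndCompute(query, None)
    _, desc_r = sift.detectAndCompute(registered, None)

    # Lowe's ratio test over 2-nearest-neighbour matches.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(desc_q, desc_r, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    return len(good)

# The server could then pick the registered object whose reference image yields
# the highest match count for the received image.
```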

According to the result of the analysis, the first server 300 can extract at least one object showing the highest degree of similarity. Information about the extracted object may be generated in the first server 300 or may be obtained from another database that the first server 300 can access.

For convenience of explanation, assuming that an image of a dog is included in the image transmitted from the portable terminal 200, the first server 300 extracts feature points from the image transmitted from the portable terminal 200 and calculates the degree of similarity between the extracted feature points and the feature points of objects registered in advance in the database 400. As a result of the calculation, the similarity between the extracted feature points and the feature points of a dog will be highest, so the first server 300 can extract the object "dog" from the image transmitted from the portable terminal 200.

According to embodiments, the image processing system 10 may further include a second server 500. The second server 500 may store an application program for performing an image processing method according to an embodiment of the present invention, and may communicate with a plurality of portable terminals through a wireless network. The second server 500 may provide a download service so that the user of the portable terminal 200 can download the application program according to the present invention to the portable terminal 200.

For example, the second server 500 may be a server of an app store market for selling various application programs, a personal server for distributing the application programs, or an enterprise server, but is not limited thereto.

FIG. 2 is a schematic block diagram of the smart device 100 shown in FIG. 1.

Referring to FIGS. 1 and 2, the smart device 100 includes a camera 120, a processor 142, a memory 144, a wireless communication module 146, a sound output unit (or sound output device) 148, a geomagnetic sensor 150, and a display 160.

The processor 142 may control the operation of at least one of the components 120, 144, 146, 148, 150, and 160 of the smart device 100 via the bus 152. In order to control the operation of the components 120, 144, 146, 148, 150, and 160, the processor 142 may execute a corresponding one of a plurality of application programs (W_APP1 to W_APPm) stored in the memory 144.

For example, the processor 142 may control the camera 120 to generate an image using an application program (e.g., W_APP1) related to the operation of the camera 120, and may control the geomagnetic sensor 150 to generate azimuth information using an application program (e.g., W_APP2) related to the operation of the geomagnetic sensor 150.

The memory 144 may store various data such as instructions necessary for the operation of the processor 142, a plurality of application programs W_APP1 to W_APPm, and / or an image. Depending on the embodiment, the memory 144 may operate as the working memory of the processor 142 and may be implemented as a cache, a dynamic random access memory (DRAM), or a static RAM (SRAM).

According to another embodiment, the memory 144 may be implemented as a flash-based memory. The flash-based memory may be a multimedia card (MMC), an embedded MMC (eMMC), a universal flash storage (UFS), or a solid state drive (SSD).

The memory 144 may be understood in a collective sense to include one or more memories.

The wireless communication module 146 may be used to transmit or receive data between the smart device 100 and the portable terminal 200. The wireless communication module 146 may communicate with the portable terminal 200 using a wireless communication scheme such as Bluetooth, near field communication (NFC), or Wi-Fi.

The sound output unit 148 can output sound according to sound data output from the memory 144 under the control of the processor 142. According to embodiments, the sound output unit 148 may refer to a speaker or earphone output terminals.

The geomagnetic sensor 150 may be used to acquire orientation information of an object. When the object is captured by the camera 120, the geomagnetic sensor 150 can generate and output azimuth information corresponding to the focus direction of the camera 120. For example, the geomagnetic sensor 150 may be a three-axis geomagnetic sensor.

According to an embodiment, the smart device 100 may further include a vibration motor (not shown). The vibration motor (not shown) may vibrate the smart device 100 by operating in response to a control signal output from the processor 142. The vibration motor (not shown) may be a linear vibration motor or a surface mount devices (SMD) type coin type vibration motor, but is not limited thereto.

FIG. 3 is a schematic block diagram of the portable terminal 200 shown in FIG. 1.

Referring to FIGS. 1 and 3, the portable terminal 200 includes a processor 210, a memory 220, a wireless communication module 230, a sound output unit (or sound output apparatus) 240, a display 250, and a global positioning system (GPS) receiving module 260.

The processor 210 may control the operation of at least one of the components 220, 230, 240, 250, and 260 via the bus 270. In order to control the operation of at least one of the components 220, 230, 240, 250, and 260, the processor 210 may execute a corresponding one of a plurality of application programs (M_APP1 to M_APPn) stored in the memory 220. In particular, an application program (e.g., M_APP1) implemented in accordance with an embodiment of the present invention may be downloaded from the second server 500.

The memory 220 may store instructions necessary for the operation of the processor 210, a plurality of application programs M_APP1 to M_APPn, and / or various data. For example, the memory 220 may be implemented with the same or similar memory as the memory 144 described with reference to FIG.

The wireless communication module 230 may be used to transmit or receive data between the portable terminal 200 and at least one of the smart device 100-1 or 100-2, the first server 300, and the second server 500.

Depending on the embodiments, the wireless communication module 230 may be implemented with a scheme such as long term evolution (LTE), wideband code division multiple access (W-CDMA), Bluetooth, NFC, or Wi-Fi.

The sound output unit 240 may output a sound in accordance with the sound data output from the memory 220. According to an embodiment, the sound output section 240 may include speaker or earphone output terminals.

The display 250 may display the image transmitted from the smart device 100-1 or 100-2 or information about objects included in the image transmitted from the first server 300.

The GPS receiving module 260 may receive a plurality of GPS satellite signals and output the position information of the portable terminal 200 using the received signals. The GPS receiving module 260 may refer to a GPS receiver.

According to the embodiments, the portable terminal 200 may further include a vibration motor (not shown). The vibration motor can vibrate the portable terminal 200 by operating in response to the control signal output from the processor 210. [ The vibration motor may be a linear vibration motor or a SMD (surface mount devices) type coin type vibration motor, but is not limited thereto.

FIG. 4 is a flowchart for explaining an image processing method according to an embodiment of the present invention. Referring to FIGS. 1 to 4, the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 shoots (or captures) an object (for example, a dog) and transmits the generated image to the portable terminal 200.

The application program M_APP1 installed in the portable terminal 200 can receive the image of the object output from the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 (S402).

According to embodiments, the application program M_APP1 may transmit distance information MAX_D to the smart device 100-1 or 100-2 in order to set the maximum distance to an object that can be captured by the smart device 100-1 or 100-2 (S400).

According to embodiments, the distance information MAX_D can be set automatically by the application program M_APP1 or manually by the user using the application program M_APP1. When the distance information MAX_D is set, the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 can photograph the object and generate an image only when the object to be photographed is within the maximum distance (for example, 2 m or 3 m) corresponding to the distance information MAX_D.
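A minimal sketch of this threshold check, assuming the smart device compares each measured depth against the received MAX_D value; the names are hypothetical and only the comparison itself is taken from the description above.

```python
# Hypothetical sketch: an object is only captured when its measured depth is
# within the maximum distance MAX_D set by the application program.
def within_max_distance(measured_depth_m: float, max_d_m: float) -> bool:
    """True if the object lies within the maximum distance MAX_D (e.g. 2 m or 3 m)."""
    return measured_depth_m <= max_d_m

MAX_D = 3.0  # metres, set automatically or manually via the application program
for depth in (1.2, 2.8, 4.5):
    print(depth, "->", "capture" if within_max_distance(depth, MAX_D) else "skip")
```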

The application program M_APP1 may transmit the image received from the smart device 100-1 or 100-2 to the first server 300 through the wireless network (S404).

According to an embodiment, when the smart device 100-1 or 100-2 is used as a black box for the safety of a child or a woman, the first server 300 may store the received image in the first server 300 or in the database 400.

The application program M_APP1 may receive information related to objects or images from the first server 300 (S406).

The information data may include various data related to the identified object according to the result of the analysis by the first server 300. For example, the information data may include a name of the object, a detailed description of the object, location information of the object, and/or an image, different from the transmitted image, that is related to the object; the name, the detailed description, and the location information may be in the form of sound, text, and/or images.

According to an embodiment, when the application program M_APP1 can interface with a commerce platform, the first server 300 can identify the object using the image and/or tag information of the object (e.g., radio frequency identification (RFID) tag information), and the information data may further include sales information (e.g., price, stock quantity, etc.) of the object.

The application program M_APP1 may control the operation of the sound output unit 148 or 240 so that the received information data may be output as a voice corresponding to the language selected by the user (S408).

The user can select any one of a plurality of languages (e.g., Korean, English, Chinese, or Japanese) using the application program M_APP1, and the application program M_APP1 can control the operation of the sound output unit 148 or 240 so that the information data is output as a voice corresponding to the selected language.

For example, when a blind person or a child is wearing the smart device 100-1 or 100-2, the portable terminal 200 and/or the smart device 100-1 or 100-2 can output information about the captured object by voice. Since information about the captured object can be output in the selected language, the blind person or the child can obtain accurate information about the object.

When the information data is in the form of a text, the application program M_APP1 can control the operation of the sound output unit 148 or 240 so as to convert the data in the form of text into data in the form of sound and output it as a voice.
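The embodiment does not specify how this text-to-speech conversion is performed. As one hedged illustration, the third-party gTTS package could synthesize the text in the selected language; the language table, function name, and output file name below are assumptions.

```python
# One possible way (not specified by the embodiment) to convert text-form
# information data into speech in the language selected by the user.
from gtts import gTTS

LANG_CODES = {"Korean": "ko", "English": "en", "Chinese": "zh-CN", "Japanese": "ja"}

def speak_info(text: str, selected_language: str = "Korean") -> str:
    """Synthesize the text-form information data as speech and return the audio file path."""
    audio = gTTS(text=text, lang=LANG_CODES[selected_language])
    out_path = "info.mp3"
    audio.save(out_path)
    return out_path  # the sound output unit can then play this file

# speak_info("A dog is about 2 meters ahead.", "English")
```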

The information data may be output to the speaker or earphone output terminal through the sound output unit 240 of the portable terminal 200.

As described above, the application program M_APP1 may display the received information data through the display 250 of the portable terminal 200 (Case 1 of S410) and/or transmit it to the smart device 100 (Case 2 of S410).

The smart device 100-1 or 100-2 can output the received information data as a voice corresponding to the selected language through the sound output unit 148 of the smart device 100-1 or 100-2 and/or display it through the display 160-1 or 160-2 of the smart device 100-1 or 100-2.

FIG. 5 is a flowchart for explaining an image processing method according to another embodiment of the present invention. Referring to FIGS. 1 to 3 and 5, the application program M_APP1 installed in the portable terminal 200 can receive, together with the image generated by the camera 120-1 or 120-2 of the smart device 100-1 or 100-2, the distance information to the photographed object and the azimuth information generated by the geomagnetic sensor 150 of the smart device 100-1 or 100-2 (S502).

The application program M_APP1 may transmit the distance information MAX_D to the smart device 100-1 or 100-2 in order to set the maximum distance to an object that can be captured by the smart device 100-1 or 100-2 (S500).

In this case, as described with reference to FIG. 4, the distance information MAX_D can be set automatically or manually. The camera 120-1 or 120-2 of the smart device 100-1 or 100-2 can capture the object and generate an image only when the object to be captured is within the maximum distance according to the set distance information MAX_D.

The application program M_APP1 may transmit the location information of the portable terminal 200 generated by the GPS receiving module 260 of the portable terminal 200 to the first server 300 together with the received image and the azimuth information (S504).

The application program M_APP1 may receive information data related to objects (e.g., dogs) or images (e.g., images for dogs) from the first server 300 (S506).

The first server 300 may analyze the image using the at least one image recognition algorithm described above and may identify the object based on the result of the analysis. At this time, the first server 300 can further identify the object by utilizing the received azimuth information and the location information of the portable terminal 200.

For example, when the object is a bus stop, the first server 300 analyzes the image of the bus stop and identifies the bus stop according to the result of the analysis; furthermore, the first server 300 can analyze the azimuth information, the location information of the portable terminal 200, and the distance information to the object, and can accurately identify which bus stop it is according to the result of the analysis.
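The embodiment does not state how the azimuth, location, and distance information are combined. The sketch below illustrates one plausible computation, under a flat-earth approximation, of the object's coordinates from the terminal's GPS position, the camera azimuth, and the measured distance; the result could then be matched against known bus-stop coordinates. All names are hypothetical.

```python
# Illustrative sketch: offset the terminal's position by the measured distance
# in the direction the camera is facing (0 deg = north, 90 deg = east).
import math

EARTH_RADIUS_M = 6_371_000.0

def estimate_object_position(lat_deg: float, lon_deg: float,
                             azimuth_deg: float, distance_m: float):
    d_north = distance_m * math.cos(math.radians(azimuth_deg))
    d_east = distance_m * math.sin(math.radians(azimuth_deg))
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + d_lat, lon_deg + d_lon

# Example: terminal at (37.5665, 126.9780), camera facing east, object 20 m away.
print(estimate_object_position(37.5665, 126.9780, 90.0, 20.0))
```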

The application program M_APP1 may control the sound output unit 240 such that the received information data, the distance information, and the azimuth information are output as sounds corresponding to the selected language through the sound output unit 240 (S508).

According to an embodiment, the application program M_APP1 may display the information data, the distance information, and the azimuth information through the display 250 of the portable terminal 200 (Case 1 of S510) and/or transmit them to the smart device 100 (Case 2 of S510). The smart device 100-1 or 100-2 can output the information data, the distance information, and the azimuth information as a voice through the sound output unit 148 of the smart device 100-1 or 100-2 and can display them through the display 160.

According to another embodiment, when a blind person is wearing the smart device 100-1 or 100-2 and the object is identified as an obstacle, the vibration motor (not shown) of the smart device 100-1 or 100-2 and/or the portable terminal 200 may be operated according to the distance between the blind person and the object, so that the smart device 100-1 or 100-2 and/or the portable terminal 200 vibrates to inform the user of the risk.

FIGS. 6 and 7 are flowcharts for explaining image processing methods according to other embodiments of the present invention. FIG. 6 may correspond to FIG. 4, and FIG. 7 may correspond to FIG. 5; the methods may be the same as or similar to those of FIGS. 4 and 5, respectively, except for the process of outputting the received information data (in the case of FIG. 7, the information data, the distance information, and the azimuth information) by voice.

Referring to FIGS. 6 and 7, the information data may not include a sound form.

For example, when a report of the disappearance of a child or an elderly person with dementia is received, a police officer or the like wearing the smart device 100-1 or 100-2 can shoot (or capture) a person and obtain information about whether that person has been reported missing, thereby helping to find the child or the elderly person with dementia who has been reported missing.

FIG. 8(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 8(b) is a flowchart illustrating a process in which the first server extracts an object included in the image according to the image processing method.

Referring to FIGS. 1 to 3, 8(a), and 8(b), the first server 300 receives an extraction keyword (for example, "dog") from the application program M_APP1 installed in the portable terminal 200 and can register the received extraction keyword in the database 400 (S800). The user of the portable terminal 200 can input the extraction keyword into the application program M_APP1.

The first server 300 may receive the image IMG_1 from the application program M_APP1 and store the received image IMG_1 in the database 400 (S820). The image IMG_1 may be an image photographed by the camera 120-1 or 120-2 of the smart device 100-1 or 100-2. As shown in FIG. 8(a), the image IMG_1 may include objects (e.g., a sign, a car, a dog, a bus stop, and a tree), but the types and the number of objects included in the image IMG_1 can vary.

The first server 300 can extract the object (e.g., the dog) corresponding to the extraction keyword (e.g., "dog") transmitted from the application program M_APP1 from among the objects (e.g., the sign, the car, the dog, the bus stop, and the tree) included in the image IMG_1 (S840). That is, only the object (for example, the dog) corresponding to the extraction keyword is extracted from among the objects included in the image IMG_1.

The first server 300 may generate (or acquire) information about the extracted object (e.g., the dog) and may transmit the generated information (e.g., its name) to the application program M_APP1 (S860). The generated information will be described with reference to FIG. 11.
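A simplified, hypothetical sketch of step S840 follows: once the server has identified the objects in the image IMG_1 (for example, by the feature-point matching described earlier), it keeps only the object whose name matches the registered extraction keyword. The data structures are assumptions, not the disclosed implementation.

```python
# Hypothetical server-side filtering step (S840).
recognized_objects = [
    {"name": "sign"}, {"name": "car"}, {"name": "dog"},
    {"name": "bus stop"}, {"name": "tree"},
]

def extract_by_keyword(objects, keyword):
    """Return only the objects matching the registered extraction keyword."""
    return [obj for obj in objects if obj["name"] == keyword]

extracted = extract_by_keyword(recognized_objects, "dog")
print(extracted)  # [{'name': 'dog'}] -> information about this object is then
                  # generated and sent to the application program (S860)
```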

FIG. 9(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 9(b) is a flowchart illustrating a process in which the first server extracts an object from the image according to the image processing method.

Referring to FIGS. 1 to 3, 9(a), and 9(b), the first server 300 receives an extraction keyword (for example, "glove + leather") from the application program M_APP1 installed in the portable terminal 200 and can register the extraction keyword in the database 400 (S900).

The first server 300 may receive the image IMG_2 from the application program M_APP1 and store the received image IMG_2 in the database 400 (S920). The image IMG_2 may be an image photographed by the camera 120-1 or 120-2 of the smart device 100-1 or 100-2. The image IMG_2 in FIG. 9(a) includes objects (e.g., gloves made of leather (leather gloves), gloves made of cotton (cotton gloves), gloves made of rubber (rubber gloves), and gloves made of fur (fur gloves)).

The first server 300, which can access the database 400, can extract the object corresponding to the registered extraction keyword from among the objects (e.g., the leather gloves, cotton gloves, rubber gloves, and fur gloves) included in the image IMG_2 (S940).

Although the types of the objects included in the image IMG_2 (e.g., the leather gloves, cotton gloves, rubber gloves, and fur gloves) are all the same (gloves), their materials differ from one another, so the first server 300 can extract only the leather gloves corresponding to the extraction keyword.

The first server 300 may generate (or acquire) information including a material of the extracted object (e.g., leather glove) and transmit the generated information to the application program M_APP1 (S960).

FIG. 10(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 10(b) is a flowchart illustrating a process in which the first server extracts an object from the image according to the image processing method.

Referring to FIGS. 1 to 3, 10(a), and 10(b), the first server 300 receives an extraction keyword (for example, "car + white") from the application program M_APP1 installed in the portable terminal 200 and registers the received extraction keyword in the database 400 (S1000).

The first server 300 may receive the image IMG_3 from the application program M_APP1 and store the received image IMG_3 in the database 400 (S1020). The image IMG_3 may be generated by the camera 120-1 or 120-2 of the smart device 100-1 or 100-2. The image IMG_3 shown in FIG. 10(a) includes objects (for example, a white car, a red car, a black car, and a gray car).

The first server 300 may extract a specific object from the image IMG_3 stored in the database 400 according to the extraction keyword registered in the database 400 (S1040). When the object (e.g., the white car) corresponding to the extraction keyword (e.g., "car + white") is extracted from among the objects (e.g., the white car, the red car, the black car, and the gray car) included in the image IMG_3, the types of the objects are all the same (cars) but their colors differ from one another, so the first server 300 can extract only the white car corresponding to the extraction keyword (e.g., "car + white").
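When the extraction keyword combines an object type with an attribute, as in "glove + leather" (FIG. 9) or "car + white" (FIG. 10), the server must match both. The sketch below is a hypothetical illustration; the attribute values are assumed to come from the server's recognition step and do not reflect a disclosed data format.

```python
# Hypothetical attribute-aware keyword matching for FIGS. 9 and 10.
identified = [
    {"type": "car", "color": "white"},
    {"type": "car", "color": "red"},
    {"type": "car", "color": "black"},
    {"type": "car", "color": "gray"},
]

def extract_by_attributes(objects, **required):
    """Keep only objects whose attributes all match the keyword terms."""
    return [o for o in objects if all(o.get(k) == v for k, v in required.items())]

print(extract_by_attributes(identified, type="car", color="white"))
# -> [{'type': 'car', 'color': 'white'}]; only this white car is then described
#    in the information sent back to the application program
```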

The first server 300 may generate (or acquire) information about the extracted object (e.g., a white automobile), and may transmit the generated information to the application program M_APP1 (S1060).

FIG. 11 is a data flow diagram according to the image processing methods of FIGS. 8 to 10.

Referring to FIGS. 1 to 3 and FIGS. 8 to 11, a user can set an extraction keyword using an application program M_APP1 installed in the portable terminal 200 (SET_KEY; S1100).

The user can directly input the extraction keyword into the application program M_APP1 using an input means (e.g., a touch pad or the like), or can select at least one of a plurality of keywords preset by the application program M_APP1. For example, when the application program M_APP1 can perform a speech recognition function, the user can input the extraction keyword into the application program M_APP1 by voice.

For example, the extraction keyword may be "dog" in the case of FIG. 8, "glove and leather" in the case of FIG. 9, and "car and white" in the case of FIG. 10.

The application program M_APP1 may transmit the extraction keyword set by the user to the first server 300 (TR_KEY; S1110). The first server 300 may register the received extraction keyword (R_KEY; S1120). Here, the registration may mean storing the extraction keyword in a storage device of the first server 300 or in the database 400 accessible by the first server 300.

The user can generate a corresponding image IMG_1, IMG_2, or IMG_3 using the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 (GEN_IMG; S1130).

The generated image IMG_1, IMG_2, or IMG_3 may include a plurality of objects (or images of a plurality of objects). For example, the plurality of objects may be a sign, a car, a dog, a bus stop, and a tree in the case of FIG. 8(a); leather gloves, cotton gloves, rubber gloves, and fur gloves in the case of FIG. 9(a); and a white car, a red car, a black car, and a gray car in the case of FIG. 10(a).

The application program M_APP1 receives the image IMG_1, IMG_2, or IMG_3 transmitted from the smart device 100-1 or 100-2 (TR_IMG; S1140) and transmits the received image IMG_1, IMG_2, or IMG_3 to the first server 300 (TRR_IMG; S1150).

The smart device 100-1 or 100-2 can transmit the image IMG_1, IMG_2 or IMG_3 to the application program M_APP1 in a wireless communication manner such as Bluetooth, NFC or Wi-Fi, and the application program M_APP1 May transmit the image IMG_1, IMG_2, or IMG_3 to the first server 300 through a wireless communication scheme such as LTE , W-CDMA, Wi-Fi, wireless Internet, or mobile communication.

The first server 300 may extract an object corresponding to the registered extraction keyword from among a plurality of objects included in the received image IMG_1, IMG_2, or IMG_3 (EXT_OBJ; S1160). The first server 300 extracts feature points of each of the plurality of objects included in the image IMG_1, IMG_2, or IMG_3 using at least one of the image recognition algorithms described above, calculates the similarity between the extracted feature points and the feature points registered in the database 400, and can identify each of the plurality of objects according to the result of the calculation. The first server 300 can then extract the object corresponding to the registered extraction keyword from among the plurality of identified objects.

For example, in the case of FIG. 8, the "dog" can be extracted as the object to be extracted; in the case of FIG. 9, the "leather gloves" can be extracted as the object to be extracted; and in the case of FIG. 10, the "white car" can be extracted as the object to be extracted.

The first server 300 generates information (GEN_INFO; S1170) about the extracted object (for example, a dog, a leather glove, or a car) and transmits the generated information to the application program M_APP1. Accordingly, the application program M_APP1 may receive the information transmitted from the first server 300 (TR_INFO; S1180).
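Before turning to the content of this information, the following is a hypothetical sketch of the application-program (M_APP1) side of the FIG. 11 data flow. The use of HTTP, the endpoint paths, and the server address are assumptions; the embodiment only requires that the keyword, the image, and the extraction information are exchanged wirelessly.

```python
# Hypothetical client-side flow of FIG. 11, using placeholder HTTP endpoints.
import requests

SERVER = "https://first-server.example.com"  # placeholder address

def run_flow(keyword: str, image_bytes: bytes) -> dict:
    # S1110/S1120: send the extraction keyword so the server can register it.
    requests.post(f"{SERVER}/keyword", json={"keyword": keyword}, timeout=10)

    # S1150: forward the image received from the smart device to the server.
    resp = requests.post(f"{SERVER}/image", files={"image": image_bytes}, timeout=30)

    # S1180: receive the extraction information (e.g. name, material, color).
    return resp.json()

# info = run_flow("dog", open("captured.jpg", "rb").read())
# The information can then be shown on the display or spoken via the sound output unit.
```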

The information about the extracted object may include information in the form of sound, an image, or text. The information about the extracted object may include, for example, a name, a material, a color, a position, a pattern, a shape, and/or a sound related to the object. Depending on the type of the object, the information about the extracted object may include only some of the above-described examples, or may further include information other than the above-described examples.

The information about the extracted object may be generated in the first server 300, acquired from the database 400 accessed by the first server 300, or acquired from another database linked to the first server 300.

The application program M_APP1 receives the information about the extracted object and can control the sound output unit 240 so that the received information is output as a voice through the sound output unit 240 of the portable terminal 200, and/or can control the display 250 so that the received information is displayed through the display 250.

According to an embodiment, the application program M_APP1 may send the received information to the smart device 100-1 or 100-2 (TRR_INFO; S1190). The smart device 100-1 or 100-2 can output the information transmitted from the application program M_APP1 as a voice through the sound output unit 148 of the smart device 100-1 or 100-2 and/or display it through the display 160-1 or 160-2.

FIG. 12 schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIGS. 13 to 15 are data flow diagrams according to embodiments of the image processing method of FIG. 12.

Referring to FIGS. 1 to 3, 12, and 13, a user can set a partial image area IREG using the application program M_APP1 installed in the portable terminal 200 (SET_IREG; S1300).

The partial image area IREG can be set by the user in order to extract only objects included in a specific area of the image IMG. For example, the partial image area IREG may be set by the user as one or more of the nine parts into which the image is divided according to the nine-division composition used in the camera 120-1 or 120-2, or may be set directly by the user. In the embodiment shown in FIG. 12, the lower portion of the entire image IMG is set by the user as the partial image area IREG.

The application program M_APP1 transmits information about the partial image area IREG set in the entire image IMG to the first server 300 (TR_IREG; S1310), and the first server 300 registers the received partial image area information (R_IREG; S1320). Here, the registration may mean storing the information about the partial image area IREG in a storage device of the first server 300 or in the database 400 accessed by the first server 300.
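The embodiment does not fix how the partial image area IREG is represented. The sketch below assumes a rectangle in normalized coordinates (the bottom third roughly matching the lower portion selected in FIG. 12) and shows how the corresponding partial image could be cropped with Pillow; the file name and constants are hypothetical.

```python
# Hypothetical representation of IREG and cropping of the corresponding partial image.
from PIL import Image

IREG = (0.0, 2 / 3, 1.0, 1.0)  # (left, top, right, bottom) as fractions of the image

def crop_to_ireg(image_path: str, ireg=IREG) -> Image.Image:
    """Return only the partial image corresponding to IREG."""
    img = Image.open(image_path)
    w, h = img.size
    box = (int(ireg[0] * w), int(ireg[1] * h), int(ireg[2] * w), int(ireg[3] * h))
    return img.crop(box)

# partial = crop_to_ireg("IMG.jpg")
# Only objects inside this partial image (the dog in FIG. 12) are extracted.
```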

The user can generate an image IMG using the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 (GEN_IMG; S1330). The generated image (IMG) may include a plurality of objects. In Fig. 12, an example of a plurality of objects included in the image (IMG) is shown as a car and a dog.

The application program M_APP1 may receive the image IMG from the smart device 100-1 or 100-2 (TR_IMG; S1340) and send the received image IMG to the first server 300 (TRR_IMG S1350). The first server 300 may store the received image (IMG) in the database 400.

The first server 300 may extract an object included in the partial image area IREG from the received image IMG (EXT_IREG; S1360). The first server 300 extracts feature points of the objects included in the partial image area IREG using at least one of the image recognition algorithms described above, calculates the similarity between the extracted feature points and the feature points registered in the database 400, and can extract the desired object by identifying each object according to the result of the calculation.

For example, in the case of FIG. 12, since the object included in the partial image region IREG among the whole images IMG is a puppy, the object extracted by the first server 300 may be a puppy. According to an embodiment, when a plurality of objects are included in the partial image area IREG, the first server 300 may extract all of the plurality of objects from the partial image area IREG.

The first server 300 generates information about the extracted object (GEN_INFO; S1370), and transmits the generated information to the application program M_APP1. Accordingly, the application program M_APP1 may receive the information generated from the first server 300 (TR_INFO; S1380).

The information on the object is substantially the same as or similar to the information described with reference to FIG. 11, so that the description of the information will be omitted.

The portable terminal 200 including the application program M_APP1 can output the received information by voice through the sound output unit 240 or display it through the display 250.

According to an embodiment, the application program M_APP1 may send the received information to the smart device 100-1 or 100-2 (TRR_INFO; S1390). The smart device 100-1 or 100-2 can output the information received from the application program M_APP1 by voice through the sound output unit 148 or display it through the display 160.

Referring to FIGS. 1 to 3, 12, and 14, a user can set a partial image area IREG using the application program M_APP1 installed in the portable terminal 200 (SET_IREG; S1400). Since the partial image area IREG has been described with reference to FIG. 13, a description thereof will be omitted.

The application program M_APP1 may transmit the set partial image area (IREG) information to the smart device 100-1 or 100-2 (TRS_IREG; S1410).

The user can generate an image IMG using the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 (GEN_IMG; S1420). The generated image (IMG) may include a plurality of objects.

The smart device 100-1 or 100-2 may transmit a partial image corresponding to the partial image area IREG, from among the generated image IMG, to the application program M_APP1 (TR_PIMG; S1430). That is, the area of the image IMG that does not correspond to the partial image area IREG is not transmitted to the portable terminal 200, so that the first server 300 later extracts only the objects included in the partial image corresponding to the partial image area IREG. Accordingly, the amount of data transmitted from the portable terminal 200 to the first server 300 is reduced.
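As an illustration of this data reduction (an assumption-laden sketch, not part of the embodiment: JPEG encoding, Pillow, and a hypothetical file name), the encoded size of the partial image is compared with that of the full image.

```python
# Compare the encoded payload of the full image with that of the IREG crop.
import io
from PIL import Image

def jpeg_size(img: Image.Image, quality: int = 85) -> int:
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getbuffer().nbytes

full = Image.open("IMG.jpg").convert("RGB")
partial = full.crop((0, full.height * 2 // 3, full.width, full.height))  # IREG of FIG. 12

print("full image bytes:   ", jpeg_size(full))
print("partial image bytes:", jpeg_size(partial))  # noticeably smaller payload
```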

The application program M_APP1 may transmit the partial image received from the smart device 100-1 or 100-2 to the first server 300 (TRR_PIMG; S1440).

The first server 300 can extract only the objects included in the received partial image (EXT_PIMG; S1450). The first server 300 extracts the feature points of each object included in the partial image using at least one of the image recognition algorithms described above, calculates the similarity between the extracted feature points and the feature points of the objects registered in the database 400, and can extract each object included in the partial image area IREG by identifying each object according to the result of the calculation.

According to an embodiment, when a plurality of objects are included in the partial image area IREG, the first server 300 may extract all of the plurality of objects.

The first server 300 generates information about the extracted at least one object (GEN_INFO; S1460), and transmits the generated information to the application program M_APP1. Accordingly, the application program M_APP1 may receive the information generated from the first server 300 (TR_INFO; S1470).

The information on the object has been described with reference to FIG. 11, and a description thereof will be omitted.

The portable terminal 200 in which the application program M_APP1 is executed can output the received information by voice through the sound output unit 240 or display it through the display 250.

According to an embodiment, the application program M_APP1 may send the received information to the smart device 100-1 or 100-2 (TRR_INFO; S1480). The smart device 100-1 or 100-2 can output the information received from the application program M_APP1 as a voice through the sound output unit 148 of the smart device 100-1 or 100-2 and/or display it through the display 160-1 or 160-2.

Referring to FIGS. 1 to 3, 12, and 15, a user can set a partial image area IREG using the application program M_APP1 installed in the portable terminal 200 (SET_IREG; S1500). Since the partial image area IREG is as shown in FIG. 12, a detailed description thereof will be omitted.

The user generates an image IMG using the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 (GEN_IMG; S1510), and the smart device 100-1 or 100-2 transmits the generated image IMG to the application program M_APP1 (TR_IMG; S1520).

The application program M_APP1 may transmit a partial image corresponding to the partial image area IREG, from among the image IMG received from the smart device 100-1 or 100-2, to the first server 300 (TRRS_PIMG; S1530). That is, the area of the image IMG that does not correspond to the partial image area IREG is not transmitted to the first server 300, so that the first server 300 later extracts only the objects included in the partial image corresponding to the partial image area IREG.

The first server 300 may extract an object included in the received partial image (EXT_PIMG; S1540). The first server 300 extracts feature points of each object included in the partial image using at least one of the image recognition algorithms described above, calculates the similarity between the extracted feature points and the feature points of the objects registered in the database 400, and can extract each object by identifying each object included in the received partial image according to the result of the calculation.

The first server 300 may generate information about the extracted object (GEN_INFO; S1550), and may transmit the generated information to the application program M_APP1. Accordingly, the application program M_APP1 may receive information generated from the first server 300 (TR_INFO; S1560).

The information on the object is as described with reference to FIG. 11, and a description thereof will be omitted.

The portable terminal 200 in which the application program M_APP1 is executed can output the received information by voice through the sound output unit 240 or display it through the display 250.

According to an embodiment, the application program M_APP1 may send the received information to the smart device 100-1 or 100-2 (TRR_INFO; S1570). The smart device 100-1 or 100-2 can output the information received from the application program M_APP1 as a voice through the sound output unit 148 of the smart device 100-1 or 100-2 and/or display it through the display 160-1 or 160-2.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be apparent to those skilled in the art that many alternatives, modifications, and variations are possible. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.

10: Image processing system
100: Smart devices
120: camera
140: main body
200: portable terminal
300: first server
400: Database
500: second server

Claims (17)

1. An image processing method using an application program installed in a portable terminal communicating with a server, the method comprising:
setting an extraction keyword using the application program, and transmitting the set extraction keyword to the server;
the application program receiving an image generated by a camera of a smart device;
the application program transmitting the received image to the server; and
the application program receiving extraction information from the server,
wherein the extraction information is information about an object extracted by the server, according to the extraction keyword, from among a plurality of objects included in the image.
2. The image processing method of claim 1, wherein the extraction information includes a name of the object extracted by the server according to the extraction keyword from among names of the plurality of objects included in the image.
3. The image processing method of claim 1, wherein the extraction information includes a material of the object extracted by the server according to the extraction keyword from among materials of the plurality of objects included in the image.
4. The image processing method of claim 1, wherein the extraction information includes a color of the object extracted by the server according to the extraction keyword from among colors of the plurality of objects included in the image.
5. The image processing method of claim 1, wherein the extraction information includes a pattern of the object extracted by the server according to the extraction keyword from among patterns of the plurality of objects included in the image.
6. The image processing method of claim 1, wherein the extraction information includes a shape of the object extracted by the server according to the extraction keyword from among shapes of the plurality of objects included in the image.
7. The image processing method of claim 1, further comprising:
the application program transmitting the extraction information to the smart device.
8. An image processing method using an application program installed in a portable terminal communicating with a server, the method comprising:
setting a partial image area using the application program and transmitting information on the set partial image area to the server;
receiving, by the application program, an image generated by a camera of a smart device;
transmitting, by the application program, the received image to the server; and
receiving, by the application program, extraction information from the server,
wherein the extraction information is information about an object included in the partial image area of the image and extracted by the server.
9. The image processing method according to claim 8, wherein the extraction information includes a name of the object.
10. The image processing method according to claim 8, wherein the extraction information includes a material of the object.
11. The image processing method according to claim 8, wherein the extraction information includes a color of the object.
12. The image processing method according to claim 8, wherein the extraction information includes a pattern of the object.
13. The image processing method according to claim 8, wherein the extraction information includes a shape of the object.
14. The image processing method according to claim 8, further comprising transmitting, by the application program, the extraction information to the smart device.
15. An image processing method using an application program installed in a portable terminal communicating with a server, the method comprising:
setting a partial image area using the application program and transmitting information on the set partial image area to a smart device;
receiving, by the application program, a partial image corresponding to the partial image area from an image generated by a camera of the smart device;
transmitting, by the application program, the received partial image to the server; and
receiving, by the application program, extraction information from the server,
wherein the extraction information is information about an object included in the partial image and extracted by the server.
16. An image processing method using an application program installed in a portable terminal communicating with a server, the method comprising:
setting a partial image area using the application program;
receiving, by the application program, an image generated by a camera of a smart device;
transmitting, by the application program, a partial image corresponding to the partial image area of the received image to the server; and
receiving, by the application program, extraction information from the server,
wherein the extraction information is information about an object included in the partial image and extracted by the server.
17. A computer-readable storage medium storing a computer program capable of executing the image processing method according to any one of claims 1 to 16.
KR1020140086557A 2014-07-10 2014-07-10 Method for processing image and storage medium storing the method KR20160006909A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140086557A KR20160006909A (en) 2014-07-10 2014-07-10 Method for processing image and storage medium storing the method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140086557A KR20160006909A (en) 2014-07-10 2014-07-10 Method for processing image and storage medium storing the method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020160087608A Division KR20160085742A (en) 2016-07-11 2016-07-11 Method for processing image

Publications (1)

Publication Number Publication Date
KR20160006909A true KR20160006909A (en) 2016-01-20

Family

ID=55307665

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140086557A KR20160006909A (en) 2014-07-10 2014-07-10 Method for processing image and storage medium storing the method

Country Status (1)

Country Link
KR (1) KR20160006909A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080020971A (en) 2006-09-01 2008-03-06 Harman Becker Automotive Systems GmbH Method for recognition an object in an image and image recognition device
KR20110044294A (en) 2008-08-11 2011-04-28 Google Inc. Object identification in images

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564068A (en) * 2018-05-04 2018-09-21 连惠城 Intelligent path-finding method and system

Similar Documents

Publication Publication Date Title
US9451406B2 (en) Beacon methods and arrangements
US8862146B2 (en) Method, device and system for enhancing location information
RU2731370C1 (en) Method of living organism recognition and terminal device
CN113228064A (en) Distributed training for personalized machine learning models
US20130322711A1 (en) Mobile dermatology collection and analysis system
US11429807B2 (en) Automated collection of machine learning training data
US9584980B2 (en) Methods and apparatus for position estimation
US10535145B2 (en) Context-based, partial edge intelligence facial and vocal characteristic recognition
CN107995422B (en) Image shooting method and device, computer equipment and computer readable storage medium
US9607366B1 (en) Contextual HDR determination
US11604820B2 (en) Method for providing information related to goods on basis of priority and electronic device therefor
CN107944414B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US11995122B2 (en) Electronic device for providing recognition result of external object by using recognition information about image, similar recognition information related to recognition information, and hierarchy information, and operating method therefor
US11681756B2 (en) Method and electronic device for quantifying user interest
WO2019105457A1 (en) Image processing method, computer device and computer readable storage medium
CN105608189A (en) Picture classification method and device and electronic equipment
CN110019907B (en) Image retrieval method and device
CN112053360B (en) Image segmentation method, device, computer equipment and storage medium
KR20160006909A (en) Method for processing image and storage medium storing the method
CN111178115B (en) Training method and system for object recognition network
CN105683959A (en) Information processing device, information processing method, and information processing system
CN113468929A (en) Motion state identification method and device, electronic equipment and storage medium
WO2017176711A1 (en) Vehicle recognition system using vehicle characteristics
KR20160085742A (en) Method for processing image
KR20120070888A (en) Method, electronic device and record medium for provoding information on wanted target

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
AMND Amendment
E601 Decision to refuse application
AMND Amendment
A107 Divisional application of patent