KR20160006909A - Method for processing image and storage medium storing the method - Google Patents
Method for processing image and storage medium storing the method
- Publication number
- KR20160006909A (Application KR1020140086557A)
- Authority
- KR
- South Korea
- Prior art keywords
- server
- image
- application program
- information
- extracted
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
An embodiment according to the concept of the present invention relates to a method of processing an image, and in particular to a method for an application program installed in a portable terminal to process an image generated by a smart device such as a wearable device.
With the recent advancement of information technology (IT), wearable devices such as smart glasses and smartwatches are emerging. A wearable device includes only the components necessary for minimal operation, in order to minimize the inconvenience of wearing it and to maximize its use time. Therefore, for most operations it is connected to a portable terminal such as a smartphone and is controlled by a dedicated application program installed in the portable terminal.
Most wearable devices include a camera, which can be used more easily and quickly than the camera of a smartphone or a dedicated camera.
Recently, various recognition technologies have been developed and utilized. One such technique recognizes the objects included in an image.
Accordingly, there is a demand for utilizing smart devices such as wearable devices and image recognition technology.
An object of the present invention is to provide a method by which an application program installed in a portable terminal extracts a desired object from among the objects included in an image, using a smart device connected to the portable terminal and a server connected to the portable terminal. In particular, the present invention provides a method for extracting the desired object by setting a keyword and/or an area.
According to another aspect of the present invention, there is provided a storage medium storing a computer-readable program capable of executing the method.
An image processing method using an application program installed in a portable terminal communicating with a server according to an embodiment of the present invention includes the steps of: setting an extraction keyword using the application program and transmitting the extraction keyword to the server; the application program receiving an image generated by a camera of a smart device; the application program transmitting the received image to the server; and the application program receiving extraction information from the server, wherein the extraction information is information about an object extracted by the server according to the extraction keyword from among a plurality of objects included in the image.
According to an embodiment, the extraction information includes the name of the object extracted by the server according to the extraction keyword from among the names of the plurality of objects included in the image.
According to another embodiment, the extraction information includes the material of the object extracted by the server according to the extraction keyword from among the materials of the plurality of objects included in the image.
According to yet another embodiment, the extraction information includes the color of the object extracted by the server according to the extraction keyword from among the colors of the plurality of objects included in the image.
According to yet another embodiment, the extraction information includes the pattern of the object extracted by the server according to the extraction keyword from among the patterns of the plurality of objects included in the image.
According to yet another embodiment, the extraction information includes the shape of the object extracted by the server according to the extraction keyword from among the shapes of the plurality of objects included in the image.
The image processing method may further include the step of the application program transmitting the extracted information to the smart device.
A method of processing an image using an application program installed in a portable terminal communicating with a server according to another embodiment of the present invention includes the steps of: setting a partial image area using the application program and transmitting the set partial image area information to the server; the application program receiving an image generated by a camera of a smart device; the application program transmitting the received image to the server; and the application program receiving extraction information from the server, wherein the extraction information is information about an object that is included in the partial image area of the image and extracted by the server.
According to another aspect of the present invention, there is provided an image processing method using an application program installed in a portable terminal communicating with a server, the method comprising the steps of: setting a partial image area using the application program and transmitting the set partial image area information to the smart device; the application program receiving a partial image corresponding to the partial image area from among the images generated by a camera of the smart device; the application program transmitting the partial image to the server; and the application program receiving extraction information from the server, wherein the extraction information is information about an object that is included in the partial image and extracted by the server.
According to yet another aspect of the present invention, there is provided a method of processing an image using an application program installed in a portable terminal communicating with a server, the method comprising the steps of: setting a partial image area using the application program; the application program receiving an image generated by a camera of a smart device; the application program transmitting a partial image corresponding to the partial image area from among the received images to the server; and the application program receiving extraction information from the server, wherein the extraction information is information about an object that is included in the partial image and extracted by the server.
A computer-readable storage medium according to an embodiment of the present invention stores a computer program capable of executing the image processing method.
An image processing method according to an exemplary embodiment of the present invention allows an application program installed in a portable terminal, using a smart device connected to the portable terminal and a server connected to the portable terminal, to extract only the object corresponding to a keyword and/or an area and to obtain information about the extracted object. The image processing method according to an embodiment of the present invention is particularly effective for object recognition for the visually impaired, navigation for walking, recognition and avoidance of obstacles, and interworking with a commerce platform.
BRIEF DESCRIPTION OF THE DRAWINGS In order to more fully understand the drawings recited in the detailed description of the present invention, a brief description of each drawing is provided.
1 schematically shows an image processing system for performing an image processing method according to an embodiment of the present invention.
2 is a schematic block diagram of the smart device shown in FIG. 1.
3 is a schematic block diagram of the portable terminal shown in FIG. 1.
4 is a flowchart for explaining an image processing method according to an embodiment of the present invention.
5 is a flowchart for explaining an image processing method according to another embodiment of the present invention.
6 is a flowchart for explaining an image processing method according to another embodiment of the present invention.
7 is a flowchart for explaining an image processing method according to another embodiment of the present invention.
8(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and 8(b) schematically shows a process in which the server extracts objects according to the image processing method.
9(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and 9(b) schematically shows a process in which the server extracts objects according to the image processing method.
10(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and 10(b) schematically shows a process in which the server extracts objects according to the image processing method.
11 is a data flow diagram according to the image processing method of FIGS. 8 to 10.
12 schematically shows an example of images and objects for explaining an image processing method according to another embodiment of the present invention.
13 is a data flow diagram according to an embodiment of the image processing method of FIG. 12.
14 is a data flow diagram according to another embodiment of the image processing method of FIG. 12.
15 is a data flow diagram according to yet another embodiment of the image processing method of FIG. 12.
It is to be understood that the specific structural or functional descriptions of the embodiments of the present invention disclosed herein are for illustrative purposes only and are not intended to limit the scope of the inventive concept, which may be embodied in many different forms and is not limited to the embodiments set forth herein.
The embodiments according to the concept of the present invention can make various changes and can take various forms, so that the embodiments are illustrated in the drawings and described in detail herein. It should be understood, however, that it is not intended to limit the embodiments according to the concepts of the present invention to the particular forms disclosed, but includes all modifications, equivalents, or alternatives falling within the spirit and scope of the invention.
The terms first, second, etc. may be used to describe various elements, but the elements should not be limited by these terms. The terms are used only for the purpose of distinguishing one element from another; for example, without departing from the scope of rights according to the concept of the present invention, a first element may be referred to as a second element, and similarly a second element may also be referred to as a first element.
It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. On the other hand, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements. Other expressions that describe the relationship between components, such as "between" and "directly between" or "adjacent to" and "directly adjacent to", should be interpreted in the same way.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" are used to specify the presence of the features, numbers, steps, operations, elements, parts, or combinations thereof described herein, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.
Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art and, unless explicitly defined herein, are not to be interpreted in an idealized or overly formal sense.
The objects described herein include both living and inanimate objects. That is, an object may include a person.
In this specification, for convenience of description, an application program is described as transmitting or receiving an image or information; however, the subject that actually transmits or receives the image or information is the portable terminal, and it may be understood that the image or information is transmitted or received under the control of the application program.
That is, when an application program executed in the portable terminal transmits or receives a signal (or data), a transmitter or receiver included in the portable terminal transmits the signal (or data) to an external device, or receives it from the external device, under the control of the application program.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings attached hereto.
1 schematically shows an image processing system for performing an image processing method according to an embodiment of the present invention.
Referring to FIG. 1, an image processing system 10 may include a smart device 100-1 or 100-2, a portable terminal 200, a first server 300, a database 400, and a second server 500.
The smart device 100-1 or 100-2 (collectively 100) may generate the image (or image data) necessary to carry out the present invention.
Referring to FIG. 1, each of the smart devices 100-1 and 100-2 (collectively 100) includes a camera 120-1 or 120-2 (collectively 120), a main body 140-1 or 140-2 (collectively 140), and a display 160-1 or 160-2 (collectively 160).
The camera 120-1 or 120-2 can convert the optical image into an electrical signal and generate an image according to the result of the conversion. For example, the camera 120-1 or 120-2 may be implemented as a complementary metal-oxide semiconductor (CMOS) image sensor. According to the embodiment, the camera 120-1 or 120-2 may include a color sensor for detecting color information and a depth sensor for detecting depth information.
The image may be a still image, a moving image, or a stereoscopic image.
According to the embodiment, the image may further include depth information (or distance information) between the smart device 100-1 or 100-2 and the object (or object). The depth information may be calculated (or output) using the depth sensor of the camera 120-1 or 120-2. The depth sensor can measure the depth (or distance) between the camera 120-1 or 120-2 and an object using a time-of-flight (TOF) measurement method.
That is, the depth sensor measures the delay time until a pulse-shaped signal (for example, a microwave, a light wave, or an ultrasonic wave) radiated from a source is reflected by an object and returned, and the distance (or depth) between the depth sensor and the object can be calculated based on the result of the measurement.
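The round-trip relation described here can be sketched as a short computation. This is an illustrative sketch only; the choice of a light-wave pulse, the function name, and the example delay value are assumptions, not details taken from the disclosure.

```python
# Illustrative time-of-flight (TOF) distance computation:
# distance = (propagation speed * round-trip delay) / 2,
# since the emitted pulse travels to the object and back.

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # for a light-wave pulse

def tof_distance_m(round_trip_delay_s: float,
                   speed_m_per_s: float = SPEED_OF_LIGHT_M_PER_S) -> float:
    """Distance between the depth sensor and the object, in metres."""
    return speed_m_per_s * round_trip_delay_s / 2

# A light pulse returning after roughly 13.34 ns corresponds to about 2 m.
print(round(tof_distance_m(13.34e-9), 2))
```

For an ultrasonic pulse, the same function applies with the speed of sound substituted for the speed of light.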
According to another embodiment, the camera 120-1 or 120-2 may be an infrared camera. Therefore, even when sufficient light is not provided (for example, at night), the camera 120-1 or 120-2 can generate an image.
The infrared camera may be a night vision camera that detects infrared wavelengths of about 0.7 μm to 3 μm.
The main body 140-1 or 140-2 can control the operation of the components of the smart device 100-1 or 100-2 and can wirelessly connect to the portable terminal 200.
The display 160-1 or 160-2 can visually display data output from the main body 140-1 or 140-2.
The
For example, the
The
The
According to the result of the analysis, the
For convenience of explanation, for example, assuming that an image of a dog is included in the image transmitted from the
In accordance with embodiments, the
For example, the
2 is a schematic block diagram of the smart device 100 shown in FIG. 1.
1 and 2, the
The
The
The
According to another embodiment, the
The
The
The
The
According to an embodiment, the
FIG. 3 is a schematic block diagram of the portable terminal 200 shown in FIG. 1.
1 and 3, the
The
The
The
Depending on the embodiments, the
The
The
The
According to the embodiments, the
4 is a flowchart for explaining an image processing method according to an embodiment of the present invention. Referring to FIGS. 1 to 4, the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 shoots (or captures) a thing (for example, a dog), and the smart device transmits the generated image to the portable terminal 200.
The application program M_APP1 installed in the
According to embodiments, the application program M_APP1 sends distance information MAX_D to the smart device 100-1 or 100-2 to set the maximum distance to an object that can be captured by the smart device 100-1 or 100-2 (S400).
According to embodiments, the distance information MAX_D can be set automatically by the application program M_APP1, or set manually by the user using the application program M_APP1. When the distance information MAX_D is set, the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 can photograph an object and generate an image only when the object to be photographed is within the maximum distance corresponding to the distance information MAX_D (for example, 2 m or 3 m).
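The distance gating described above can be sketched as follows. The threshold value and the function name are illustrative assumptions; only the example distances ("2 m or 3 m") come from the text.

```python
# Illustrative MAX_D gating: an image is generated only when the object to be
# photographed lies within the configured maximum distance.

MAX_D = 3.0  # metres; set automatically or manually via the application program

def should_capture(object_distance_m: float, max_d: float = MAX_D) -> bool:
    """True when the object is within the maximum capture distance."""
    return object_distance_m <= max_d

print([should_capture(d) for d in (2.0, 3.0, 4.5)])
```

The measured distance itself would come from the depth sensor described earlier.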
The application program M_APP1 may transmit the image received from the smart device 100-1 or 100-2 to the
According to an embodiment, when the smart device 100-1 or 100-2 is used as a black box for child or female security, the
The application program M_APP1 may receive information related to objects or images from the first server 300 (S406).
The information data may include various data related to the identified object according to the result analyzed by the
According to the embodiment, when the application program M_APP1 can be interfaced with the commerce platform, the
The application program M_APP1 may control the operation of the
The user can select any one of a plurality of languages (e.g., Korean, English, Chinese, or Japanese) (e.g., Korean or English) using the application program M_APP1, Can control the operation of the
For example, when the blind or the child is wearing the smart device 100-1 or 100-2, the
When the information data is in the form of a text, the application program M_APP1 can control the operation of the
The information data may be output to the speaker or earphone output terminal through the
As described above, the application program M_APP1 may transmit the received information data to the display (Case 1 of S410) and / or the
The smart device 100-1 or 100-2 outputs the received information data as a voice corresponding to the selected language through the
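The overall flow of FIG. 4 on the portable-terminal side can be sketched in Python. Every class and method name here is an illustrative stand-in; the patent does not define a programming interface, and the stub server merely echoes its input where a real first server would perform object recognition.

```python
# Hypothetical sketch of the FIG. 4 flow on the portable terminal: forward the
# image received from the wearable to the server, receive information data,
# and render it as speech in the selected language.

class StubServer:
    """Stands in for the first server's recognition service."""
    def analyze(self, image):
        # A real server would run object recognition; here we just echo.
        return f"recognized object in {image}"

def process_capture(image, server, language="Korean"):
    info = server.analyze(image)          # transmit image, receive information data
    spoken = f"[{language} TTS] {info}"   # text-to-speech output (sketched as a string)
    return info, spoken

info, spoken = process_capture("frame_001", StubServer())
print(spoken)
```

The same skeleton covers the variant that relays the information data back to the smart device for voice output there.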
5 is a flowchart for explaining an image processing method according to another embodiment of the present invention. 1 to 3 and 5, the application program M_APP1 installed in the
The application program M_APP1 sends the distance information MAX_D to the smart device 100-1 or 100-2 to set the maximum distance of the object to be captured in the smart device 100-1 or 100-2, (S500).
In this case, as described with reference to FIG. 4, the distance information MAX_D can be set automatically or manually. The camera 120-1 or 120-2 of the smart device 100-1 or 100-2 captures the object, and an image can be generated, only when the object to be captured is within the maximum distance according to the set distance information MAX_D.
The application program M_APP1 may transmit the location information of the
The application program M_APP1 may receive information data related to objects (e.g., dogs) or images (e.g., images for dogs) from the first server 300 (S506).
The
For example, when the object is a bus stop, the
The application program M_APP1 may control the
According to the embodiment, the application program M_APP1 displays the information data, the distance information, and the orientation information through the
According to another embodiment, when a blind person is wearing the smart device 100-1 or 100-2 and the object is identified as an obstacle, the distance between the blind person and the object may be output through a vibration device (not shown) of the smart device 100-1 or 100-2 and/or the portable terminal 200.
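An escalating proximity alert of this kind can be sketched as follows. The distance thresholds (3 m and 1.5 m) and the alert names are assumed for illustration; the disclosure does not specify them.

```python
# Illustrative obstacle alert for a visually impaired user: as the measured
# distance to an identified obstacle decreases, escalate the alert output.

def obstacle_alert(distance_m: float) -> str:
    if distance_m > 3.0:
        return "none"
    if distance_m > 1.5:
        return "vibrate"           # gentle warning via the vibration device
    return "vibrate+voice"         # urgent warning via vibration and voice

print([obstacle_alert(d) for d in (5.0, 2.0, 1.0)])
```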
6 and 7 are flowcharts for explaining image processing methods according to other embodiments of the present invention. Referring to FIGS. 6 and 7, FIG. 6 may correspond to FIG. 4, and FIG. 7 may correspond to FIG. 5. The methods may be the same as or similar to those of FIGS. 4 and 5, respectively, except for the process of outputting the received information data (in the case of FIG. 7, the received information data, distance information, and azimuth information) by voice.
Referring to FIGS. 6 and 7, the information data may not include a sound form.
For example, when a report of the disappearance of a child or an elderly person with dementia is received, a police officer or the like can wear the smart device 100-1 or 100-2, shoot (or capture) a person, and find information about whether that person has been reported missing, thereby helping to locate the missing child or elderly person.
8(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 8(b) is a flowchart illustrating a process in which the first server extracts objects included in the image according to the image processing method.
1 to 3, 8A and 8B, the
The
The
The
9(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 9(b) schematically shows a process in which the server extracts objects according to the image processing method.
1 to 3, 9A and 9B, the
The
The
When objects corresponding to extracted keywords are extracted from objects included in the image IMG_2 (e.g., leather gloves, cotton gloves, rubber gloves, and fur gloves), the types of objects are the same as gloves, The
The
10(a) schematically shows an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIG. 10(b) schematically shows a process in which the server extracts objects according to the image processing method.
1 to 3, 10A and 10B, the
The
The
The
FIG. 11 is a data flow chart according to the image processing method of FIGS. 8 to 10. FIG.
Referring to FIGS. 1 to 3 and FIGS. 8 to 11, a user can set an extraction keyword using an application program M_APP1 installed in the portable terminal 200 (SET_KEY; S1100).
The user can directly input an extraction keyword into the application program M_APP1 using an input means (e.g., a touch pad), or can select at least one of a plurality of keywords preset by the application program M_APP1. For example, when the application program M_APP1 can perform a speech recognition function, the user can input the extraction keyword into the application program M_APP1 by voice.
For example, the extraction keywords may be "puppies" in FIG. 8, "gloves and leather" in FIG. 9, and "cars and white" in FIG. 10.
The application program M_APP1 may transmit the extracted keywords set by the user to the first server 300 (TR_KEY; S1110). The
The user can generate a corresponding image IMG_1, IMG_2, or IMG_3 using the camera 120-1 or 120-1 of the smart device 100-1 or 100-2 (GEN_IMG; S1130).
The generated image IMG_1, IMG_2, or IMG_3 may include a plurality of objects (or images of a plurality of objects). For example, the plurality of objects may be a signboard, a car, a dog, a bus stop, and a tree in the case of FIG. 8(a); leather gloves, cotton gloves, rubber gloves, and fur gloves in the case of FIG. 9(a); and a white car, a red car, a black car, and a gray car in the case of FIG. 10(a).
The application program M_APP1 receives the image IMG_1, IMG_2, or IMG_3 transmitted from the smart device 100-1 or 100-2 (TR_IMG; S1140), and transmits the received image IMG_1, IMG_2, or IMG_3 to the first server 300 (TRR_IMG; S1150).
The smart device 100-1 or 100-2 can transmit the image IMG_1, IMG_2, or IMG_3 to the application program M_APP1 in a wireless communication manner such as Bluetooth, NFC, or Wi-Fi, and the application program M_APP1 may transmit the image IMG_1, IMG_2, or IMG_3 to the first server 300.
The
In the case of FIG. 9, the "leather gloves" can be extracted as the object to be extracted. In the case of FIG. 10, the "white car" can be extracted as the object to be extracted.
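The keyword-based extraction of FIGS. 8 to 10 can be sketched as a simple attribute filter: an object is extracted only when every extraction keyword matches one of its attributes. The attribute values below are assumed example data mirroring FIG. 9; a real server would obtain them from image recognition.

```python
# Illustrative keyword matching over recognized objects, as in FIGS. 8-10.

objects_in_img2 = [
    {"name": "gloves", "material": "leather"},
    {"name": "gloves", "material": "cotton"},
    {"name": "gloves", "material": "rubber"},
    {"name": "gloves", "material": "fur"},
]

def extract(objects, keywords):
    """Keep objects for which every keyword matches some attribute value."""
    return [o for o in objects
            if all(kw in o.values() for kw in keywords)]

# "gloves" alone matches all four objects, so adding "leather" is what
# narrows the result to a single object, as in FIG. 9.
print(extract(objects_in_img2, ["gloves", "leather"]))
```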
The
The information about the extracted object may include information in sound form, image form, or text form. The information about the extracted object may include, for example, a name, a material, a color, a position, a pattern, a shape, and/or a sound. The information about the extracted object may include only some of the above-described examples depending on the type of the object, or may further include information other than the above-described examples.
The information about the extracted object may be acquired from the
The application program M_APP1 receives the information about the extracted object and controls the
According to an embodiment, the application program M_APP1 may send the received information to the smart device 100-1 or 100-2 (TRR_INFO; S1190). The smart device 100-1 or 100-2 outputs the information transmitted from the application program M_APP1 as a voice through the
FIG. 12 schematically illustrates an example of an image and objects for explaining an image processing method according to another embodiment of the present invention, and FIGS. 13 to 15 are data flow diagrams according to embodiments of the image processing method of FIG. 12.
Referring to FIGS. 1 to 3, 12, and 13, a user can set a partial image area IREG using an application program M_APP1 installed in the portable terminal 200 (SET_IREG; S1300).
The partial image area IREG can be set by the user to extract only objects included in a specific area of the image IMG. For example, the partial image area IREG may be set by the user as one or more of the nine parts divided according to the nine-division composition used in the camera 120-1 or 120-2, or may be set directly by the user. In the embodiment shown in FIG. 12, the lower portion of the entire image IMG can be set by the user as the partial image area IREG.
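A nine-division cell can be mapped to a pixel rectangle as in the following sketch. The grid indexing (row 0 at the top), the rectangle convention, and the example image size are assumptions for illustration.

```python
# Illustrative mapping from a nine-division (3x3) cell selection to a pixel
# rectangle for the partial image area IREG.

def cell_rect(width: int, height: int, row: int, col: int) -> tuple:
    """Pixel rectangle (left, top, right, bottom) of grid cell (row, col)."""
    cw, ch = width // 3, height // 3
    return (col * cw, row * ch, (col + 1) * cw, (row + 1) * ch)

# Bottom-centre cell of a 1920x1080 image, e.g. the lower region of FIG. 12.
print(cell_rect(1920, 1080, row=2, col=1))
```

A directly drawn region would simply supply the rectangle without going through the grid.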
The application program M_APP1 sends information about the partial image area IREG set in the entire image IMG to the first server 300 (TR_IREG; S1310), and the
The user can generate an image IMG using the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 (GEN_IMG; S1330). The generated image (IMG) may include a plurality of objects. In Fig. 12, an example of a plurality of objects included in the image (IMG) is shown as a car and a dog.
The application program M_APP1 may receive the image IMG from the smart device 100-1 or 100-2 (TR_IMG; S1340) and send the received image IMG to the first server 300 (TRR_IMG S1350). The
The
For example, in the case of FIG. 12, since the object included in the partial image area IREG of the entire image IMG is a puppy, the object extracted by the first server 300 may be the puppy.
The
The information on the object is substantially the same as or similar to the information described with reference to FIG. 11, so that the description of the information will be omitted.
The
According to an embodiment, the application program M_APP1 may send the received information to the smart device 100-1 or 100-2 (TRR_INFO; S1390). The smart device 100-1 or 100-2 can output the information received from the application program M_APP1 via the
Referring to FIGS. 1 to 3, 12, and 14, a user can set a partial image area IREG using an application program M_APP1 installed in the portable terminal 200 (SET_IREG; S1400). Since the partial image area IREG has been described with reference to FIG. 13, a description thereof will be omitted.
The application program M_APP1 may transmit the set partial image area (IREG) information to the smart device 100-1 or 100-2 (TRS_IREG; S1410).
The user can generate an image IMG using the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 (GEN_IMG; S1420). The generated image (IMG) may include a plurality of objects.
The smart device 100-1 or 100-2 may transmit a partial image corresponding to the partial image area IREG from among the generated images IMG to the application program M_APP1 (TR_PIMG; S1430). That is, the area of the image IMG that does not correspond to the partial image area IREG is not transmitted to the application program M_APP1.
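The crop-before-send step can be sketched as follows: only the pixels inside the partial image area IREG are kept, so the non-selected region never leaves the smart device. The nested-list "image" is a stand-in for real pixel data, and the rectangle convention is an assumption.

```python
# Illustrative crop to the partial image area IREG before transmission.

def crop(image, rect):
    """rect = (left, top, right, bottom) in pixel coordinates."""
    left, top, right, bottom = rect
    return [row[left:right] for row in image[top:bottom]]

image = [[(r, c) for c in range(4)] for r in range(4)]  # 4x4 "pixels"
partial = crop(image, (0, 2, 4, 4))  # lower half as the IREG, as in FIG. 12
print(len(partial), len(partial[0]))
```

The variant of FIG. 15 performs the same crop on the portable terminal instead, before transmitting to the first server.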
The application program M_APP1 may transmit the partial image received from the smart device 100-1 or 100-2 to the first server 300 (TRR_PIMG; S1440).
The
According to an embodiment, when a plurality of objects are included in the partial image area IREG, the
The
The information on the object has been described with reference to FIG. 11, and a description thereof will be omitted.
The
According to an embodiment, the application program M_APP1 may send the received information to the smart device 100-1 or 100-2 (TRR_INFO; S1480). The smart device 100-1 or 100-2 outputs the information received from the application program M_APP1 as a voice through the
Referring to FIGS. 1 to 3, 12, and 15, a user can set a partial image area IREG using an application program M_APP1 installed in the portable terminal 200 (SET_IREG; S1500). Since the partial image area IREG is as shown in FIG. 12, a detailed description thereof will be omitted.
The user generates an image IMG (GEN_IMG; S1510) using the camera 120-1 or 120-2 of the smart device 100-1 or 100-2 and transmits the generated image IMG to the application program M_APP1) (TR_IMG; S1520).
The application program M_APP1 may transmit a partial image corresponding to the partial image area IREG from among the images IMG received from the smart device 100-1 or 100-2 to the first server 300 (TRRS_PIMG; S1530). That is, the area of the image IMG that does not correspond to the partial image area IREG is not transmitted to the first server 300.
The
The
The information on the object is as described with reference to FIG. 11, and a description thereof will be omitted.
The
According to an embodiment, the application program M_APP1 may send the received information to the smart device 100-1 or 100-2 (TRR_INFO; S1570). The smart device 100-1 or 100-2 outputs the information received from the application program M_APP1 as a voice through the
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.
10: Image processing system
100: Smart devices
120: camera
140: main body
200: portable terminal
300: first server
400: Database
500: second server
Claims (17)
Setting an extraction keyword using the application program, and transmitting the extracted keyword to the server;
The application program receiving an image generated by a camera of a smart device;
The application program transmitting the received image to the server; And
The application program receiving extraction information from the server,
The extraction information includes:
Wherein the extraction information is information about the object extracted by the server according to the extracted keyword from among a plurality of objects included in the image.
And the name of the object extracted by the server, according to the extraction keyword, from among the names of the plurality of objects included in the image.
And the material of the object extracted by the server, according to the extraction keyword, from among the materials of the plurality of objects included in the image.
And the color of the object extracted by the server, according to the extraction keyword, from among the colors of the plurality of objects included in the image.
And the pattern of the object extracted by the server, according to the extraction keyword, from among the patterns of the plurality of objects included in the image.
And the shape of the object extracted by the server, according to the extraction keyword, from among the shapes of the plurality of objects included in the image.
The application program sending the extracted information to the smart device.
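The dependent claims above enumerate the fields the extraction information may carry: the name, material, color, pattern, and shape of the extracted object. A minimal sketch of such a record, assuming a hypothetical `ExtractionInfo` container (the patent does not prescribe a data format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionInfo:
    """Fields mirror the dependent claims: each is optional, since the
    server may extract only a subset for a given object."""
    name: Optional[str] = None
    material: Optional[str] = None
    color: Optional[str] = None
    pattern: Optional[str] = None
    shape: Optional[str] = None

# Example: the server extracted name, material, and color for one object.
info = ExtractionInfo(name="cup", material="ceramic", color="white")
print(info.name)
```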
Setting a partial image area using the application program, and transmitting the set partial image area information to the server;
The application program receiving an image generated by a camera of a smart device;
The application program transmitting the received image to the server; And
The application program receiving extraction information from the server,
The extraction information includes:
Wherein the extraction information is information of an object included in the partial image area of the image and extracted by the server.
And the name of the object.
And the material of the object.
And the color of the object.
And a pattern of the object.
And the shape of the object.
The application program sending the extracted information to the smart device.
Setting a partial image area using the application program, and transmitting the set partial image area information to the smart device;
The application program receiving a partial image corresponding to the partial image area from an image generated by a camera of the smart device;
Transmitting the partial image received by the application program to the server; And
The application program receiving extraction information from the server,
The extraction information includes:
Wherein the extraction information is information of an object extracted by the server from the partial image.
Setting a partial image area using the application program;
The application program receiving an image generated by a camera of a smart device;
Transmitting, to the server, a partial image corresponding to the partial image area of the received image; And
The application program receiving extraction information from the server,
The extraction information includes:
Wherein the extraction information is information of an object extracted by the server from the partial image.
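The claim groups above differ chiefly in where the cropping happens: the full image plus the partial image area information may be sent to the server, or the application program (or the smart device itself) may crop first and transmit only the partial image. A sketch contrasting two of these variants, with hypothetical function names:

```python
def crop(image, region):
    """Crop a list-of-rows image to (top, left, bottom, right)."""
    top, left, bottom, right = region
    return [row[left:right] for row in image[top:bottom]]

def send_full_with_region(image, region):
    # Variant: transmit the full image together with the partial image
    # area information; the server crops and extracts the object itself.
    return {"image": image, "region": region}

def send_cropped(image, region):
    # Variant: the application program crops first and transmits only
    # the partial image, reducing the data sent to the server.
    return {"image": crop(image, region)}

image = [[0] * 10 for _ in range(10)]   # dummy 10x10 image
full = send_full_with_region(image, (1, 1, 5, 5))
partial = send_cropped(image, (1, 1, 5, 5))
print(len(full["image"]), len(partial["image"]))
```

In both variants the server ultimately extracts information only from the object inside the partial image area; the trade-off is bandwidth (cropping early sends less data) versus client-side work.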
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140086557A KR20160006909A (en) | 2014-07-10 | 2014-07-10 | Method for processing image and storage medium storing the method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140086557A KR20160006909A (en) | 2014-07-10 | 2014-07-10 | Method for processing image and storage medium storing the method |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160087608A Division KR20160085742A (en) | 2016-07-11 | 2016-07-11 | Method for processing image |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20160006909A true KR20160006909A (en) | 2016-01-20 |
Family
ID=55307665
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140086557A KR20160006909A (en) | 2014-07-10 | 2014-07-10 | Method for processing image and storage medium storing the method |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20160006909A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108564068A (en) * | 2018-05-04 | 2018-09-21 | 连惠城 | A kind of intelligence is explored the way method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080020971A (en) | 2006-09-01 | 2008-03-06 | 하만 베커 오토모티브 시스템즈 게엠베하 | Method for recognition an object in an image and image recognition device |
KR20110044294A (en) | 2008-08-11 | 2011-04-28 | 구글 인코포레이티드 | Object identification in images |
2014
- 2014-07-10 KR KR1020140086557A patent/KR20160006909A/en active Application Filing
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
AMND | Amendment | ||
E601 | Decision to refuse application | ||
AMND | Amendment | ||
A107 | Divisional application of patent |