CN105354252A - Information processing method and apparatus - Google Patents

Information processing method and apparatus

Info

Publication number
CN105354252A
CN105354252A (Application CN201510680764.4A)
Authority
CN
China
Prior art keywords
mobile device
view data
described mobile
geographic coordinate
training image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510680764.4A
Other languages
Chinese (zh)
Inventor
蒋树强
吕雄
贺志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Institute of Computing Technology of CAS
Original Assignee
Lenovo Beijing Ltd
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd, Institute of Computing Technology of CAS filed Critical Lenovo Beijing Ltd
Priority to CN201510680764.4A priority Critical patent/CN105354252A/en
Publication of CN105354252A publication Critical patent/CN105354252A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an information processing method and apparatus for a mobile device. The information processing method comprises: obtaining first image data by the mobile device; uploading the first image data and a first geographic coordinate of the location of the mobile device from the mobile device to a server; and receiving the identified classification category of the first image data from the server, wherein the classification category of the first image data is obtained by recognizing the first image data uploaded from the mobile device based on the first geographic coordinate of the location of the mobile device. According to the information processing method provided by the invention, image recognition of outdoor scenes and objects is performed based on geographic location information; image data is recognized according to the image content and geographic information uploaded by a user; and the recognition result is returned to the mobile client via a network, so that the user can obtain practical information on the current outdoor scene or object in time, and the user experience is improved.

Description

Information processing method and device
Technical field
The present invention relates to an information processing method and device, and more particularly to an information processing method and device for image classification.
Background technology
At present, outdoor travel and camping involve many dangers and unknowns, and some of them cannot be described directly and accurately in words — for example, identifying poisonous mushrooms when gathering mushrooms, or judging whether a campsite is exposed to disasters such as debris flows. Existing solutions are essentially based on text queries, for example searching Baidu for "where is suitable for camping". However, such queries are limited by the specificity of language, and a scene is often difficult to describe clearly in words, so text-based querying is unsuitable in many cases. Moreover, current image recognition systems such as the one based on the Baidu knowledge graph focus mainly on plant and animal encyclopedias, whereas outdoor scenes and objects are strongly correlated with geographic information: water and willows can be seen in Beihai Park, and antelopes can be seen on the Qinghai-Tibet Plateau.
Summary of the invention
In order to solve the above technical problems in the prior art, according to one aspect of the present invention, an information processing method for a mobile device is provided. The information processing method comprises: obtaining first image data by the mobile device; uploading the first image data and a first geographic coordinate of the location of the mobile device from the mobile device to a server; and receiving from the server the identified classification category of the first image data, wherein the classification category of the first image data is obtained by recognizing the first image data uploaded from the mobile device based on the first geographic coordinate of the location of the mobile device.
In addition, according to one embodiment of the present invention, the information processing method comprises: using training images to establish, at the server, classification models based on geographic location information; when the first image data and the first geographic coordinate of the location of the mobile device are received from the mobile device, classifying the first image data uploaded from the mobile device based on the first geographic coordinate of the location of the mobile device; and sending the identified classification category of the first image data to the mobile device.
In addition, according to one embodiment of the present invention, the training images comprise: positive training images, which are images of the target scene or object category within a specific region around the geographic coordinate of the location of the mobile device; and negative training images, which are images of scene or object categories other than the target scene or object within the same specific region.
In addition, according to one embodiment of the present invention, the step of classifying the first image data uploaded from the mobile device based on the first geographic coordinate of the location of the mobile device further comprises: extracting, at the server, visual features from the uploaded first image data; looking up, based on the first geographic coordinate of the location of the mobile device, the classification models within the specific region around the geographic coordinate; classifying the visual features of the uploaded first image data with the classification models; computing the confidences with which the first image data belongs to each category; and selecting the category with the highest confidence as the classification category obtained by recognizing the first image data.
In addition, according to one embodiment of the present invention, in the step of using training images to establish classification models based on geographic location information at the server, the features of the training images used comprise visual features, and the classification model is a support vector machine model and/or a neural network model.
In addition, according to one embodiment of the present invention, the visual features of the training images used by the classification model comprise a color histogram and/or deep features.
According to another aspect of the present invention, an information processing device for a mobile device is further provided. The information processing device comprises: a capture unit configured to obtain first image data by the mobile device; an upload unit configured to upload the first image data and a first geographic coordinate of the location of the mobile device from the mobile device to a server; and a receiving unit configured to receive from the server the identified classification category of the first image data, wherein the classification category of the first image data is obtained by recognizing the first image data uploaded from the mobile device based on the first geographic coordinate of the location of the mobile device.
According to another aspect of the present invention, an information processing device for a server is further provided. The information processing device comprises: a model building unit configured to use training images to establish classification models based on geographic location information at the server; a classification unit configured to classify, when the first image data and the first geographic coordinate of the location of the mobile device are received from the mobile device, the first image data uploaded from the mobile device based on the first geographic coordinate of the location of the mobile device; and a sending unit configured to send the identified classification category of the first image data to the mobile device.
In addition, according to one embodiment of the present invention, the training images comprise: positive training images, which are images of the target scene or object category within a specific region around the geographic coordinate of the location of the mobile device; and negative training images, which are images of scene or object categories other than the target scene or object within the same specific region.
In addition, according to one embodiment of the present invention, the classification unit further comprises: a feature extraction unit configured to extract, at the server, visual features from the uploaded first image data; a lookup unit configured to look up, based on the first geographic coordinate of the location of the mobile device, the classification models within the specific region around the geographic coordinate; a feature classification unit configured to classify the visual features of the uploaded first image data with the classification models; a computing unit configured to compute the confidences with which the first image data belongs to each category; and a recognition unit configured to select the category with the highest confidence as the classification category obtained by recognizing the first image data.
In addition, according to one embodiment of the present invention, the features of the training images used by the model building unit comprise visual features, and the classification model is a support vector machine model and/or a neural network model.
In addition, according to one embodiment of the present invention, the visual features of the training images used by the classification model comprise a color histogram and/or deep features.
It can thus be seen that the information processing method and device provided by the present invention optimize existing information processing methods by recognizing outdoor scenes and objects in images based on geographic location information, taking into account not only the visual information of the image data itself but also its geographic location. Specifically, the method and device automatically recognize image data according to the image content and geographic information uploaded by the user, and return the recognition result to the mobile client over the network, so that the user obtains practical information about the current outdoor scene or object in time, improving the user experience.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only exemplary embodiments of the present invention:
Fig. 1 shows a flowchart of an information processing method 100 applied to a mobile device according to an embodiment of the present invention;
Fig. 2 shows a flowchart of an information processing method 200 applied to a server according to an embodiment of the present invention;
Fig. 3 shows an exemplary block diagram of an information processing device 300 applied to a mobile device according to an embodiment of the present invention; and
Fig. 4 shows an exemplary block diagram of an information processing device 400 applied to a server according to an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, exemplary embodiments according to the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. It should be understood that the present invention is not limited by the exemplary embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments described in this disclosure without creative effort shall fall within the protection scope of the present invention.
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings.
Fig. 1 shows a flowchart of an information processing method applied to a mobile device according to an embodiment of the present invention. Below, an information processing method 100 according to an embodiment of the present invention is described with reference to Fig. 1. As shown in Fig. 1, first, in step S110, first image data can be obtained by the mobile device. In one embodiment of the present invention, the first image data may be image data of a whole image captured by the camera of the mobile device, image data of a partial image obtained after the mobile device captures and processes a whole image, or image data of a captured image that the mobile device obtains from another device.
Then, in step S120, the first image data and a first geographic coordinate of the location of the mobile device can be uploaded from the mobile device to a server. For example, in one embodiment of the present invention, the longitude and latitude values of the location of the mobile device can be used as the geographic coordinate of the location of the mobile device.
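The upload step described above can be sketched as follows. This is a minimal illustration only: the field names, the base64 encoding of the image, and the optional content-type field are assumptions for the sketch, not part of the patent.

```python
import base64
import json
from typing import Optional


def build_upload_payload(image_bytes: bytes, longitude: float, latitude: float,
                         content_type: Optional[str] = None) -> str:
    """Serialize the first image data and the first geographic coordinate
    (longitude/latitude of the device) as a JSON payload. The optional
    content_type carries the user-selected "object"/"scene" type."""
    payload = {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "longitude": longitude,
        "latitude": latitude,
    }
    if content_type is not None:  # user-selected "object" or "scene"
        payload["content_type"] = content_type
    return json.dumps(payload)
```

A client would send this payload to the server over the network; the endpoint and transport are left unspecified here, as the patent does not fix them.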
Finally, in step S130, the identified classification category of the first image data can be received from the server, wherein the classification category of the first image data is obtained by recognizing the first image data uploaded from the mobile device based on the first geographic coordinate of the location of the mobile device.
In one embodiment of the present invention, after the mobile device obtains the first image data, the user can select the type of the image content of the first image data on the mobile device, for example, whether the image content belongs to an object or a scene; and when the first image data and the first geographic coordinate of the location of the mobile device are uploaded from the mobile device to the server, the user's selection of the type of the image content of the first image data can also be uploaded from the mobile device to the server. The efficiency and accuracy with which the server subsequently classifies the image content of the first image data can thereby be further improved.
Below, with reference to Fig. 2, an information processing method for classifying, at the server, the image data received from the mobile device according to an embodiment of the present invention is described. Fig. 2 shows a flowchart of an information processing method 200 applied to a server according to an embodiment of the present invention. As shown in Fig. 2, in step S210, training images are used to establish classification models based on geographic location information at the server. Specifically, in one embodiment of the present invention, as mentioned above, the geographic information can be longitude and latitude values. Further, the training images can comprise: positive training images, which can be images of the target scene or object category within a specific region around the geographic coordinate of the location of the mobile device; and negative training images, which can be images of scene or object categories other than the target scene or object within the same specific region. In one embodiment of the present invention, in step S210, the features of the training images used to establish the classification models based on geographic location information at the server can comprise visual features of the training images, and the established classification models can comprise support vector machine models and/or neural network models. Specifically, in one example, the visual feature of the training images used by the classification model can be a color histogram; in another example, it can be a deep feature. In one embodiment of the present invention, positive and negative training images can be selected for each category of scene or object within a certain geographic range, and a visual classification model can be trained for each scene and object category. This both reduces the scale of training and thus increases training speed, and, by avoiding training on categories of scenes or objects irrelevant to those within the geographic range, improves the efficiency and accuracy of classifying the target scenes and objects.
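The construction of per-region positive/negative training sets described above can be sketched as follows. The record layout (`region`, `label`, feature vector) is an assumed representation for illustration; the patent does not prescribe a data format.

```python
from typing import List, Tuple

# (region, category label, visual feature vector) — an assumed record layout
Record = Tuple[str, str, List[float]]


def training_sets(photos: List[Record], region: str, target_label: str
                  ) -> Tuple[List[List[float]], List[List[float]]]:
    """Positive examples: images of the target category inside the region.
    Negative examples: images of the other categories inside the same region.
    Images outside the region are ignored, which shrinks the training scale."""
    pos = [f for r, lbl, f in photos if r == region and lbl == target_label]
    neg = [f for r, lbl, f in photos if r == region and lbl != target_label]
    return pos, neg
```

One such positive/negative split would be built per category per region, and each split would feed the training of that category's visual classifier.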
Then, in step S220, when the first image data and the first geographic coordinate of the location of the mobile device uploaded from the mobile device are received, the first image data uploaded from the mobile device can be classified based on the first geographic coordinate of the location of the mobile device. Specifically, in one embodiment of the present invention, step S220 may further comprise: extracting, at the server, visual features from the uploaded first image data; looking up, based on the first geographic coordinate of the location of the mobile device, the classification models within the specific region around the geographic coordinate; classifying the visual features of the uploaded first image data with the classification models; computing the confidences with which the first image data belongs to each category; and selecting the category with the highest confidence as the classification category obtained by recognizing the first image data.
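The server-side flow of step S220 can be sketched as follows. The names (`RegionModel`, `classify_image`, the `region_of` lookup) are illustrative assumptions; each model's `score` stands in for whatever classifier (SVM, neural network, etc.) produces a confidence for a feature vector.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class RegionModel:
    label: str                              # e.g. "poisonous mushroom"
    score: Callable[[List[float]], float]   # confidence for a feature vector


def classify_image(features: List[float],
                   coord: Tuple[float, float],
                   models_by_region: Dict[str, List[RegionModel]],
                   region_of: Callable[[Tuple[float, float]], str]) -> str:
    """Look up the classifiers for the region containing the device's
    coordinate, score the extracted visual features with each, and return
    the label whose classifier reports the highest confidence."""
    region = region_of(coord)
    confidences = [(m.label, m.score(features)) for m in models_by_region[region]]
    return max(confidences, key=lambda lc: lc[1])[0]
```

Only the models of the matched region are scored, which is what keeps the recognition fast relative to scoring every model the server holds.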
Finally, in step S230, the identified classification category of the first image data can be sent to the mobile device. In one embodiment of the present invention, when the first image data and the first geographic coordinate of the location of the mobile device are received from the mobile device, the user's selection of the type of the image content of the first image data can also be received from the mobile device. Thus, when the visual features of the uploaded first image data are classified with the classification models, only the classification models corresponding to the user-selected image type need be used. For example, if the user selects "object" as the image content type of the first image data, only the classification models of the object type in the area near the geographic coordinate are used to classify the image data uploaded by the user, further improving the efficiency and accuracy with which the server classifies the image content of the first image data.
For example, in one embodiment of the present invention, when a user traveling or camping outdoors encounters an unknown mushroom, the user can photograph the mushroom with a camera-equipped electronic device and, using the mobile device carried along, either upload the image of the mushroom directly to the server or process the captured image and upload the processed image data to the server. At the same time, the mobile device can obtain the longitude and latitude values of the user's current location through its built-in GPS (Global Positioning System) and send them to the server together with the aforementioned image data. In particular, the user can also select on the mobile device whether the image content belongs to an "object" or a "scene", and send the selected "object" or "scene" type to the server as well. For example, in the present embodiment, since a mushroom is an object rather than a scene, the user can select the "object" type and upload the selection result, together with the image data and the geographic location information of the user's location, to the server.
In the present embodiment, pictures collected in advance from various picture websites can be used as training images to establish, at the server, classification models based on geographic location information; for each object or scene category, a corresponding classification model can be established. The number of object or scene categories and the size of the category scope can be set according to need. For example, in this example, the geographic location information may indicate Mount Emei; then, for the "object" type, coarse categories such as ape, mushroom and tree can be set, or finer categories such as macaque, orangutan, poisonous mushroom, non-poisonous mushroom, pine and cypress; for the "scene" type, coarse categories such as sun and landscape can be set, or finer categories such as sunrise, sunset, forest and lake. Pictures of the objects and scenes of Mount Emei collected from major picture websites can be used in advance as the positive and negative training images of each classification model at the server. For example, if the categories macaque, orangutan, poisonous mushroom, non-poisonous mushroom, pine and cypress are set, then, when establishing the classification model for the non-poisonous-mushroom category, the positive training images used can be all pictures of non-poisonous mushrooms in Mount Emei, and the negative training images can be pictures of the other object kinds in Mount Emei, excluding non-poisonous mushrooms. Similarly, for the classification model of the poisonous-mushroom category, the positive training images can be all pictures of poisonous mushrooms in Mount Emei, and the negative training images can be pictures of the other object kinds among the animals and plants of Mount Emei, excluding poisonous mushrooms. Then, image features, such as visual features, can be extracted from the training images; the visual features can comprise color histograms, deep features from convolutional neural networks, and the like. Methods of extracting image features using color histograms or deep convolutional neural networks are known to those skilled in the art and are not repeated here. Then, after the image features are extracted, the multi-dimensional vectors obtained from them can be used to train a classification model for each object or scene category. The classification model can comprise a support vector machine classification model, a neural network classification model or the like, and methods of training such classification models with multi-dimensional vectors are known to those skilled in the art and are not repeated here.
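The color-histogram visual feature mentioned above can be sketched as follows, assuming RGB pixels and 4 bins per channel (the bin count is an illustrative assumption). The resulting fixed-length vector is the kind of multi-dimensional vector that would feed a classifier such as a support vector machine (e.g. `sklearn.svm.SVC` in a real implementation).

```python
from typing import List, Sequence, Tuple


def color_histogram(pixels: Sequence[Tuple[int, int, int]],
                    bins: int = 4) -> List[float]:
    """Return a normalized per-channel color histogram: bins*3 values,
    each the fraction of pixels whose R, G or B value falls in that bin."""
    hist = [0] * (bins * 3)
    width = 256 // bins
    for px in pixels:
        for ch, value in enumerate(px):
            hist[ch * bins + min(value // width, bins - 1)] += 1
    n = float(len(pixels))
    return [count / n for count in hist]
```

For a whole image, `pixels` would be the flattened pixel data (e.g. from an image library); each channel's bins sum to 1, so the full vector sums to 3.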
After the server receives, from the user's mobile device, the aforementioned image data, the geographic location information of the user's location and the result of the user's image-type selection, the previously established classification models of each category corresponding to the user-selected image type can be used to classify the image content of the image data uploaded by the user. For example, in the present embodiment, after the server receives the image of the mushroom (or the image data of the processed mushroom image) from the user's mobile device, the geographic coordinate indicating that the user is in Mount Emei, and the user-selected "object" image type, each previously established classifier of the "object" type for the Mount Emei region can be used to classify the mushroom image. If, among the results returned by the classification models, the poisonous-mushroom model returns the highest confidence value, the mushroom in the image is most probably poisonous, and the image can be classified as a poisonous-mushroom image. Similarly, if the non-poisonous-mushroom model returns the highest confidence value, the mushroom in the image is most probably non-poisonous, and the image can be classified as a non-poisonous-mushroom image. Finally, the server can send the classification result of the recognized mushroom image to the mobile device, and the user's mobile device can then receive the classification result from the server. It can be seen that, in the present embodiment, when recognizing and classifying the query image uploaded by the user, the corresponding set of classification models is selected according to the recognition type selected by the user, for example, the "scene" type or the "object" type; at the same time, the classification models within a certain coverage area are selected according to the geographic position coordinate of the received image data. Recognition and classification are then performed on the image data with this combined set, which reduces the scale of the classification models used during recognition and thereby greatly improves the speed and accuracy of classification.
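The coverage-area selection described above can be sketched with a great-circle distance filter. The 50 km radius and the idea of anchoring each model to a coordinate are illustrative assumptions; the patent only requires that models within some region around the device's coordinate be selected.

```python
import math
from typing import Dict, List, Tuple


def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))


def models_in_range(device: Tuple[float, float],
                    model_coords: Dict[str, Tuple[float, float]],
                    radius_km: float = 50.0) -> List[str]:
    """Return the ids of classification models anchored within radius_km
    of the device's coordinate; only these are scored during recognition."""
    return [mid for mid, c in model_coords.items()
            if haversine_km(device, c) <= radius_km]
```

Intersecting this geographic selection with the user's "object"/"scene" type selection yields the final, much smaller model set used for classification.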
It can thus be seen that, by using the information processing method 100 for a mobile device and the information processing method 200 for a server provided by the present invention, existing information processing methods can be optimized: outdoor scenes and objects in images are recognized based on geographic location information, taking into account not only the visual information of the image data itself but also its geographic location; the image data is recognized and classified according to the image content and geographic information uploaded by the user; and the classification result is returned to the mobile client over the network.
In addition, another aspect of the present invention provides an information processing device 300 applied to a mobile device. Below, the information processing device applied to a mobile device according to the present invention is described with reference to Fig. 3. Fig. 3 shows an exemplary block diagram of the information processing device 300 applied to a mobile device according to an embodiment of the present invention. As shown in Fig. 3, the information processing device 300 can comprise: a capture unit 310, an upload unit 320 and a receiving unit 330.
Specifically, the capture unit 310 can be configured to obtain first image data by the mobile device. In one embodiment of the present invention, the first image data may be image data of a whole image captured by the camera of the mobile device, image data of a partial image obtained after the mobile device captures and processes a whole image, or image data of a captured image that the mobile device obtains from another device.
The upload unit 320 can be configured to upload the first image data and a first geographic coordinate of the location of the mobile device from the mobile device to a server.
The receiving unit 330 can be configured to receive from the server the identified classification category of the first image data, wherein the classification category of the first image data is obtained by recognizing the first image data uploaded from the mobile device based on the first geographic coordinate of the location of the mobile device.
In one embodiment of the present invention, after the mobile device obtains the first image data, the user can also select the type of the image content of the first image data on the mobile device, for example, whether the image content belongs to an object or a scene; and when the upload unit 320 uploads the first image data and the first geographic coordinate of the location of the mobile device from the mobile device to the server, the upload unit 320 can also upload the user's selection of the type of the image content of the first image data to the server. The efficiency and accuracy with which the server subsequently classifies the image content of the first image data can thereby be further improved.
In addition, another aspect of the present invention provides an information processing device 400 applied to a server. Fig. 4 shows an exemplary block diagram of the information processing device 400 applied to a server according to an embodiment of the present invention. As shown in Fig. 4, the information processing device 400 can comprise: a model building unit 410, a classification unit 420 and a sending unit 430.
Specifically, the model building unit 410 can be configured to use training images to establish classification models based on geographic location information at the server. Specifically, in one embodiment of the present invention, as mentioned above, the geographic information can be longitude and latitude values. Further, the training images can comprise: positive training images, which can be images of the target scene or object category within a specific region around the geographic coordinate of the location of the mobile device; and negative training images, which can be images of scene or object categories other than the target scene or object within the same specific region. In one embodiment of the present invention, the features of the training images used by the model building unit 410 comprise visual features, and the classification model can be a support vector machine model and/or a neural network model. In one example, the visual feature of the training images of the classification model can be a color histogram; in another example, it can be a deep feature. In one embodiment of the present invention, the model building unit 410 can select positive and negative training images for each category of scene or object within a certain geographic range and train a visual classification model for each scene and object category, which both reduces the scale of training and thus increases training speed, and, by avoiding training on categories of scenes or objects irrelevant to those within the geographic range, improves the efficiency and accuracy of classifying the target scenes and objects.
The classification unit 420 may be configured such that, upon receiving first image data uploaded from a mobile device and a first geographic coordinate of the position of the mobile device, it classifies the first image data uploaded from the mobile device based on the first geographic coordinate. Specifically, in one embodiment of the present invention, the classification unit 420 may further comprise: a feature extraction unit, configured to extract a visual feature from the uploaded first image data at the server end; a search unit, configured to look up, based on the first geographic coordinate of the position of the mobile device, the classification models within a specific region around the geographic coordinate; a feature classification unit, configured to classify the visual feature of the uploaded first image data using the classification models; a calculation unit, configured to calculate a plurality of confidences with which the first image data belongs to each category; and a recognition unit, configured to select the category with the maximum confidence as the classification category obtained by recognizing the first image data.
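The lookup-score-select flow of the classification unit can be sketched as below. This is an assumption-laden illustration: the radius, the haversine lookup, and the `scorer` callables are not specified by the patent, which only requires that models "within a specific region" of the coordinate be consulted and the maximum-confidence category returned.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def classify(feature, coord, models, radius_km=5.0):
    """models: list of (category, (lat, lon), scorer) where scorer(feature)
    returns a confidence in [0, 1]. Looks up models within radius_km of the
    device coordinate, scores each, and returns (best_category, confidence);
    (None, 0.0) if no model is registered near the coordinate."""
    nearby = [(cat, scorer) for cat, (lat, lon), scorer in models
              if haversine_km(coord[0], coord[1], lat, lon) <= radius_km]
    if not nearby:
        return None, 0.0
    scores = [(cat, scorer(feature)) for cat, scorer in nearby]
    return max(scores, key=lambda s: s[1])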
The transmitting unit 430 may be configured to send the recognized classification category of the first image data to the mobile device. As can be seen, the information processing apparatus 400 provided by the present invention selects positive and negative training sets within a certain geographic range around each target scene or object and trains the visual classification models of scenes and objects separately, which both reduces the scale of training, thereby increasing training speed, and, by avoiding training of categories unrelated to the target scene or object, improves the recognition accuracy for target scenes and objects. In one embodiment of the present invention, when the classification unit 420 receives the first image data and the first geographic coordinate of the position of the mobile device from the mobile device, it may also receive from the mobile device the user's selection of the type of the image content of the first image data. Thus, when the classification unit 420 classifies the visual feature of the uploaded first image data, it may use only the classification models corresponding to the image type selected by the user. For example, if the user indicates that the image content type of the first image data is "object", the classification unit 420 may use only the object-type classification models of the region near the geographic coordinate to classify the uploaded image data, thereby further improving the efficiency and accuracy with which the server classifies the image content of the first image data.
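The optional content-type filter just described can be sketched as a pre-selection step over the candidate models. The dict field names (`category`, `type`, `scorer`) and the two type labels `"scene"`/`"object"` are hypothetical conveniences for this example.

```python
def select_models(models, content_type=None):
    """models: iterable of dicts with keys 'category', 'type', 'scorer'.
    With content_type given ('scene' or 'object'), keep only matching models;
    with no selection from the user, keep all models."""
    if content_type is None:
        return list(models)
    return [m for m in models if m["type"] == content_type]

def best_category(feature, models, content_type=None):
    """Score the upload only against the filtered candidate set and return
    the category with the highest confidence (None if no candidate)."""
    candidates = select_models(models, content_type)
    if not candidates:
        return None
    return max(candidates, key=lambda m: m["scorer"](feature))["category"]
```

Shrinking the candidate set before scoring is what yields the claimed efficiency gain: a user-marked "object" upload is never scored against the scene models of the same region.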
As can be seen, by using the information processing apparatus 300 for a mobile device and the information processing apparatus 400 for a server provided by the present invention, existing information processing methods can be optimized to perform outdoor scene and object image recognition based on geographical location information. Not only the visual information of the image data itself but also its geographical location information is considered; the image data uploaded by the user can be recognized and classified according to its content and geographic information, and the classification result is returned to the mobile client over the network, improving the efficiency and accuracy of classifying target scenes and objects.
It should be noted that, in this specification, the terms "comprise", "comprising" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
Finally, it should also be noted that the above series of processes includes not only processes performed in chronological order as described here, but also processes performed in parallel or separately rather than in chronological order.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus the necessary hardware platform, or, of course, entirely by hardware. Based on such an understanding, all or part of the contribution of the technical solution of the present invention over the background art may be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments, or in certain parts of the embodiments, of the present invention.
The present invention has been described in detail above. Specific examples have been used herein to set forth the principles and embodiments of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core ideas. Meanwhile, for a person of ordinary skill in the art, changes may be made to the specific embodiments and the application scope according to the ideas of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (12)

1. An information processing method for a mobile device, the information processing method comprising:
acquiring first image data by the mobile device;
uploading the first image data and a first geographic coordinate of the position of the mobile device from the mobile device to a server; and
receiving the recognized classification category of the first image data from the server,
wherein the classification category of the first image data is a classification category obtained by recognizing, based on the first geographic coordinate of the position of the mobile device, the first image data uploaded from the mobile device.
2. An information processing method for a server, the information processing method comprising:
using training images to establish, at the server end, classification models based on geographical location information;
upon receiving first image data uploaded from a mobile device and a first geographic coordinate of the position of the mobile device, classifying the first image data uploaded from the mobile device based on the first geographic coordinate of the position of the mobile device; and
sending the recognized classification category of the first image data to the mobile device.
3. The information processing method of claim 2, wherein the training images comprise:
positive example training images, which are images of a target scene or object category within a specific region around the geographic coordinate of the position of the mobile device; and
negative example training images, which are images of scene or object categories other than the target scene or object within the specific region around the geographic coordinate of the position of the mobile device.
4. The information processing method of claim 3, wherein the step of classifying, based on the first geographic coordinate of the position of the mobile device, the first image data uploaded from the mobile device further comprises:
extracting a visual feature from the uploaded first image data at the server end;
looking up, based on the first geographic coordinate of the position of the mobile device, the classification models within a specific region around the geographic coordinate;
classifying the visual feature of the uploaded first image data using the classification models;
calculating a plurality of confidences with which the first image data belongs to each category; and
selecting the category with the maximum confidence as the classification category obtained by recognizing the first image data.
5. The information processing method of claim 3, wherein, in the step of using training images to establish classification models based on geographical location information at the server end, the features of the training images used comprise visual features, and the classification model is a support vector machine model and/or a neural network model.
6. The information processing method of claim 5, wherein the visual features of the training images of the classification model comprise: a color histogram and/or a depth feature.
7. An information processing apparatus for a mobile device, the information processing apparatus comprising:
a photographing unit, configured to acquire first image data by the mobile device;
an uploading unit, configured to upload the first image data and a first geographic coordinate of the position of the mobile device from the mobile device to a server; and
a receiving unit, configured to receive the recognized classification category of the first image data from the server,
wherein the classification category of the first image data is a classification category obtained by recognizing, based on the first geographic coordinate of the position of the mobile device, the first image data uploaded from the mobile device.
8. An information processing apparatus for a server, the information processing apparatus comprising:
a model establishing unit, configured to use training images to establish, at the server end, classification models based on geographical location information;
a classification unit, configured to, upon receiving first image data uploaded from a mobile device and a first geographic coordinate of the position of the mobile device, classify the first image data uploaded from the mobile device based on the first geographic coordinate of the position of the mobile device; and
a transmitting unit, configured to send the recognized classification category of the first image data to the mobile device.
9. The information processing apparatus of claim 8, wherein the training images comprise:
positive example training images, which are images of a target scene or object category within a specific region around the geographic coordinate of the position of the mobile device; and
negative example training images, which are images of scene or object categories other than the target scene or object within the specific region around the geographic coordinate of the position of the mobile device.
10. The information processing apparatus of claim 9, wherein the classification unit further comprises:
a feature extraction unit, configured to extract a visual feature from the uploaded first image data at the server end;
a search unit, configured to look up, based on the first geographic coordinate of the position of the mobile device, the classification models within a specific region around the geographic coordinate;
a feature classification unit, configured to classify the visual feature of the uploaded first image data using the classification models;
a calculation unit, configured to calculate a plurality of confidences with which the first image data belongs to each category; and
a recognition unit, configured to select the category with the maximum confidence as the classification category obtained by recognizing the first image data.
11. The information processing apparatus of claim 10, wherein the features of the training images used by the model establishing unit comprise visual features, and the classification model is a support vector machine model and/or a neural network model.
12. The information processing apparatus of claim 11, wherein the visual features of the training images of the classification model comprise: a color histogram and/or a depth feature.
CN201510680764.4A 2015-10-19 2015-10-19 Information processing method and apparatus Pending CN105354252A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510680764.4A CN105354252A (en) 2015-10-19 2015-10-19 Information processing method and apparatus


Publications (1)

Publication Number Publication Date
CN105354252A true CN105354252A (en) 2016-02-24

Family

ID=55330225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510680764.4A Pending CN105354252A (en) 2015-10-19 2015-10-19 Information processing method and apparatus

Country Status (1)

Country Link
CN (1) CN105354252A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090190797A1 (en) * 2008-01-30 2009-07-30 Mcintyre Dale F Recognizing image environment from image and position
US20090257663A1 (en) * 2008-04-14 2009-10-15 Jiebo Luo Image classification using capture-location-sequence information
CN102682091A (en) * 2012-04-25 2012-09-19 腾讯科技(深圳)有限公司 Cloud-service-based visual search method and cloud-service-based visual search system
CN102880879A (en) * 2012-08-16 2013-01-16 北京理工大学 Distributed processing and support vector machine (SVM) classifier-based outdoor massive object recognition method and system
CN103310466A (en) * 2013-06-28 2013-09-18 安科智慧城市技术(中国)有限公司 Single target tracking method and achievement device thereof
CN104036235A (en) * 2014-05-27 2014-09-10 同济大学 Plant species identification method based on leaf HOG features and intelligent terminal platform


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106169065A (en) * 2016-06-30 2016-11-30 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN106169065B (en) * 2016-06-30 2019-12-24 联想(北京)有限公司 Information processing method and electronic equipment
CN106228193A (en) * 2016-07-29 2016-12-14 北京小米移动软件有限公司 Image classification method and device
CN106228193B (en) * 2016-07-29 2019-08-06 北京小米移动软件有限公司 Image classification method and device
CN108921040A (en) * 2018-06-08 2018-11-30 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN109034380A (en) * 2018-06-08 2018-12-18 四川斐讯信息技术有限公司 A kind of distributed image identification system and its method
CN109271899A (en) * 2018-08-31 2019-01-25 朱钢 A kind of implementation method improving Ai wisdom photography scene recognition accuracy
CN109447150A (en) * 2018-10-26 2019-03-08 杭州睿琪软件有限公司 A kind of plants ' aesthetics method, apparatus, electronic equipment and storage medium
CN109447150B (en) * 2018-10-26 2020-12-18 杭州睿琪软件有限公司 Plant viewing method, plant viewing device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105354252A (en) Information processing method and apparatus
CN107481327B (en) About the processing method of augmented reality scene, device, terminal device and system
CN108885698B (en) Face recognition method and device and server
US20220358742A1 (en) Insect identification method and system
EP3188081B1 (en) Data processing method and device
CN107291888B (en) Machine learning statistical model-based living recommendation system method near living hotel
CN101911098B (en) Recognizing image environment from image and position
CN103631819B (en) A kind of method and system of picture name
CN107993191A (en) A kind of image processing method and device
US20210027061A1 (en) Method and system for object identification
TWI615776B (en) Method and system for creating virtual message onto a moving object and searching the same
CN110162643B (en) Electronic album report generation method, device and storage medium
IL255797A (en) Methods and systems of providing visual content editing functions
CN104331509A (en) Picture managing method and device
WO2020098121A1 (en) Method and device for training fast model, computer apparatus, and storage medium
CN105912650B (en) Method and device for recommending songs
CN108897757B (en) Photo storage method, storage medium and server
CN106844492A (en) A kind of method of recognition of face, client, server and system
CN111161265A (en) Animal counting and image processing method and device
CN108734146A (en) Facial image Age estimation method, apparatus, computer equipment and storage medium
CN107578003B (en) Remote sensing image transfer learning method based on geographic marking image
Kalantar et al. Smart counting–oil palm tree inventory with UAV
CN107977392B (en) Method, device and system for identifying picture book and electronic equipment
CN106095830A (en) A kind of image geo-positioning system based on convolutional neural networks and method
US20140359015A1 (en) Photo and video sharing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160224