CN110168530A - Electronic device and method of operating the electronic device - Google Patents


Info

Publication number
CN110168530A
CN110168530A (application CN201880005869.1A)
Authority
CN
China
Prior art keywords
model
image
electronic device
multiple images
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201880005869.1A
Other languages
Chinese (zh)
Other versions
CN110168530B (en)
Inventor
姜诚珉
翰兴宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from PCT/KR2018/000069 external-priority patent/WO2018128362A1/en
Publication of CN110168530A publication Critical patent/CN110168530A/en
Application granted granted Critical
Publication of CN110168530B publication Critical patent/CN110168530B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F16/51 Indexing; Data structures therefor; Storage structures (information retrieval of still image data)
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/23 Clustering techniques
    • G06F18/24 Classification techniques
    • G06F18/24143 Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06N3/044 Recurrent networks, e.g. Hopfield networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An electronic device includes a processor configured to: obtain multiple images; extract deep features of the multiple images using a feature extraction model; classify the multiple images into groups using the extracted deep features and a classification model; display a result of the classification on a display; determine, using the result of the classification, whether the feature extraction model and/or the classification model needs to be updated; and, based on a result of the determination, train and update at least one of the feature extraction model and the classification model. The electronic device may evaluate the deep features of an image using a rule-based algorithm or an artificial intelligence (AI) algorithm. When evaluating the deep features of an image using an AI algorithm, the electronic device may use machine learning, a neural network, a deep learning algorithm, or the like.

Description

Electronic device and method of operating the electronic device
Technical field
The disclosure relates generally to electronic devices and methods of operating them, and for example to an electronic device capable of classifying multiple images into groups or assigning a specific keyword to a group, and to a method of operating the electronic device.
In addition, the disclosure relates to an artificial intelligence (AI) system that provides recognition and decision-making using a machine learning algorithm such as deep learning, and to applications thereof.
Background technique
With the development of information and communication technology (ICT) and semiconductor technology, various electronic devices have evolved into multimedia devices providing a variety of multimedia services, such as messenger services, broadcast services, wireless Internet services, camera services, and music playback services.
In addition, electronic devices provide functions for classifying and searching images. An electronic device can classify a user's images into groups using a preset classification standard, but because the standard is fixed, it cannot provide classification results optimized for the user.
An electronic device can also store a user's images together with keywords and provide an image search function using the keywords. However, a keyword search can only find images stored under keywords assigned by the user, so the user must remember exactly which keyword corresponds to the desired image.
In addition, recently, artificial intelligence (AI) systems have been introduced into the field of image processing.
An AI system differs from existing rule-based intelligent systems in that the machine learns, decides, and becomes intelligent on its own. The more an AI system is used, the higher its recognition rate and the more accurately it understands a user's preferences; accordingly, deep-learning-based AI systems are gradually replacing existing rule-based systems.
AI technology includes machine learning (deep learning) and element technologies that use machine learning.
Machine learning is an algorithmic technique that classifies and learns the features of input data on its own. An element technology is a technology that uses a machine learning algorithm such as deep learning to simulate functions such as recognition and decision-making, and spans technical fields including language understanding, visual understanding, reasoning/prediction, knowledge representation, and operation control.
AI technology is applied in various fields as follows. Language understanding is a technology for recognizing and processing human language/characters, and includes natural language processing, machine translation, dialogue systems, question answering, and speech recognition/synthesis. Visual understanding is a technology for recognizing objects, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, and image enhancement. Reasoning/prediction is a technology for logically reasoning about and predicting information by judging it, and includes knowledge/probability-based reasoning, optimization prediction, and preference-based planning or recommendation. Knowledge representation is a technology for automating human experience information into knowledge data, and includes knowledge construction (data generation/classification) and knowledge management (data utilization). Operation control is a technology for controlling autonomous driving of a vehicle or the motion of a robot, and includes motion control (navigation, collision avoidance, driving) and manipulation control (behavior control).
Summary of the invention
Solution
Provided are an electronic device that extracts features of images, classifies the images based on the extracted features, and searches for similar images based on the extracted features, and a method of operating the electronic device.
Brief description of the drawings
These and/or other aspects, features and attendant advantages of the disclosure will become apparent and more readily appreciated from the following detailed description, taken in conjunction with the accompanying drawings, in which like reference numerals refer to like elements, and wherein:
Fig. 1 is a diagram illustrating an example method, performed by an electronic device, of classifying images, according to an example embodiment of the disclosure;
Fig. 2 is a flowchart illustrating an example method of operating an electronic device, according to an example embodiment of the disclosure;
Fig. 3A, Fig. 3B and Fig. 3C are diagrams illustrating example methods of extracting deep features of an image, according to an example embodiment of the disclosure;
Fig. 4 is a diagram illustrating example results of classifying multiple images, by an electronic device, using a feature extraction model and a classification model trained based on general data, according to an example embodiment of the disclosure;
Fig. 5 is a diagram illustrating example results of classifying multiple images, by an electronic device, using an updated feature extraction model and an updated classification model, according to an example embodiment of the disclosure;
Fig. 6A and Fig. 6B are diagrams illustrating an example method of classifying multiple images using a feature extraction model and a classification model trained based on user data, according to an example embodiment of the disclosure;
Fig. 7A is a flowchart illustrating example operations of a server and an electronic device, according to an example embodiment of the disclosure;
Fig. 7B is a flowchart illustrating an example method of operating a server, a first processor and a second processor, according to an example embodiment of the disclosure;
Fig. 7C is a flowchart illustrating an example method of operating a server, a first processor, a second processor and a third processor, according to an example embodiment of the disclosure;
Fig. 8A is a flowchart illustrating an example method of operating an electronic device, according to an example embodiment of the disclosure;
Fig. 8B is a flowchart illustrating an example method of operating a first processor and a second processor included in an electronic device, according to an example embodiment of the disclosure;
Fig. 8C is a flowchart illustrating an example method of operating a first processor, a second processor and a third processor included in an electronic device, according to an example embodiment of the disclosure;
Fig. 9 and Fig. 10 are diagrams illustrating example methods, performed by an electronic device, of searching for an image, according to an example embodiment of the disclosure;
Fig. 11 is a block diagram illustrating an example configuration of an electronic device, according to an example embodiment of the disclosure;
Fig. 12 is a block diagram illustrating an example processor, according to an example embodiment of the disclosure;
Fig. 13 is a block diagram illustrating an example data learning unit, according to an example embodiment of the disclosure;
Fig. 14 is a block diagram illustrating an example data classification unit, according to an example embodiment of the disclosure;
Fig. 15 is a diagram illustrating an example in which an electronic device and a server cooperate to learn and recognize data, according to an example embodiment of the disclosure; and
Fig. 16 is a block diagram illustrating an example configuration of an electronic device, according to another example embodiment of the disclosure.
Best mode for carrying out the invention
Provided are an electronic device that extracts features of images, classifies the images based on the extracted features, and searches for similar images based on the extracted features, and a method of operating the electronic device.
Provided are an electronic device capable of assigning, to an image, a keyword suitable for the features of the image even when no keyword has been set by the user, and a method of operating the electronic device.
Additional example aspects will be set forth in part in the description which follows and, in part, will be apparent from the description.
According to an example aspect of an example embodiment, an electronic device includes: a display; a memory configured to store at least one instruction; and a processor configured to execute the at least one instruction stored in the memory to cause the electronic device to: obtain multiple images, extract deep features of the multiple images using a feature extraction model, classify the multiple images into groups using the extracted deep features and a classification model, display a result of the classification on the display, determine, using the result of the classification, whether the feature extraction model and the classification model need to be updated, and, based on a result of the determination, train and update at least one of the feature extraction model and the classification model.
According to an example aspect of another example embodiment, a method of operating an electronic device includes: obtaining multiple images; extracting deep features of the multiple images using a feature extraction model; classifying the multiple images into groups using the extracted deep features and a classification model, and displaying a result of the classification; determining, using the result of the classification, whether the feature extraction model and the classification model need to be updated; and, based on a result of the determination, training and updating at least one of the feature extraction model and the classification model.
Detailed description
All terms used herein, including descriptive or technical terms, should be understood to have meanings that are obvious to one of ordinary skill in the art.
However, these terms may have different meanings according to the intention of one of ordinary skill in the art, precedent cases, or the emergence of new technologies. Also, some terms may be arbitrarily selected, and in this case their meanings are described in the disclosure. Thus, the terms used herein must be defined based on their meanings together with the description throughout the disclosure.
When a part "includes" or "comprises" an element, unless there is a particular description to the contrary, the part may further include other elements, not excluding them. In the following description, terms such as "unit" and "module" indicate a unit for processing at least one function or operation, where the unit and the module may be implemented as hardware (e.g., circuitry), firmware, or software, or by any combination of hardware, firmware, and software.
Various example embodiments of the disclosure will now be described more fully with reference to the accompanying drawings. The disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments of the disclosure to those of ordinary skill in the art. In the following description, well-known functions or constructions may not be described in detail where they would obscure the example embodiments of the disclosure with unnecessary detail, and like reference numerals in the drawings denote like or similar elements throughout this specification.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. An expression such as "at least one of", when preceding a list of elements, modifies the entire list of elements rather than modifying the individual elements in the list.
Fig. 1 is a diagram illustrating an example method, performed by an electronic device 100, of classifying images, according to an example embodiment of the disclosure.
The electronic device 100 according to an embodiment may be implemented in any of various forms. For example, and without limitation, the electronic device 100 may be realized as any of various electronic devices such as a mobile phone, a smartphone, a laptop computer, a desktop PC, a tablet personal computer (PC), an e-book terminal, a digital broadcast terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, an MP3 player, a digital camera, a camcorder, an Internet protocol television (IPTV), a digital television (DTV), or a wearable device (e.g., a smartwatch or smart glasses), but is not limited thereto.
In embodiments of the disclosure, the term "user" may refer to a person who controls a function or operation of the electronic device, and may be a viewer, a manager, or an installation engineer.
The electronic device 100 according to an embodiment may obtain multiple images 10. The multiple images 10 may include images captured using the electronic device 100, images stored in the electronic device 100, or images received from an external device.
The electronic device 100 according to an embodiment may extract deep features of the multiple images 10 using a feature extraction model 20.
The feature extraction model 20 may be, for example and without limitation, a neural-network-based model. For example, a model such as a deep neural network (DNN), a recurrent neural network (RNN), or a bidirectional recurrent deep neural network (BRDNN) may be used as the feature extraction model 20, but the model is not limited thereto.
In addition, the feature extraction model 20 may initially be a model trained based on general data. A deep feature of an image may include a vector extracted from at least one layer included in at least one neural network by inputting the image into the at least one neural network.
The vector may represent features of the image. Features of an image may include, for example, the shape or type of an object included in the image, or the place where the image was captured.
Accordingly, the feature extraction model 20 may be trained using images and their features (e.g., object shape, object type, scene recognition result, or capture place) as learning data. For example, the feature extraction model 20 may be trained using an image of a puppy, the breed of the puppy, and the capture place as learning data. Also, the feature extraction model 20 may be trained using an image of a night view, the names of buildings included in the night view, and the capture place as learning data. Accordingly, when an image is input, the electronic device 100 may extract a deep feature including such features of the image using the feature extraction model 20.
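The idea of taking an intermediate-layer activation as an image's deep feature can be sketched as follows. This is an illustrative toy example, not part of the claimed subject matter: the network size, weights, and input are hypothetical stand-ins for a trained feature extraction model such as a DNN or CNN.

```python
import random

random.seed(0)

# Hypothetical weights of a tiny two-layer network standing in for a
# trained feature extraction model.
IN, HIDDEN, OUT = 8, 4, 2
W1 = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(IN)]
W2 = [[random.uniform(-1, 1) for _ in range(OUT)] for _ in range(HIDDEN)]

def relu(x):
    return x if x > 0 else 0.0

def layer(vec, weights):
    # One fully connected layer followed by a ReLU nonlinearity.
    cols = len(weights[0])
    return [relu(sum(v * weights[i][j] for i, v in enumerate(vec)))
            for j in range(cols)]

def extract_deep_feature(image_vec):
    """The deep feature is the activation of an intermediate (hidden)
    layer, not the final classification output."""
    return layer(image_vec, W1)

image = [0.5] * IN                  # stand-in for a flattened image
feature = extract_deep_feature(image)
print(len(feature))                 # 4
```

The point of the sketch is only that the feature vector comes from a layer inside the network: the same hidden activation that feeds the final layer (`W2`) is reused as a compact representation of the image.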
When the deep features of the multiple images 10 are extracted, the electronic device 100 according to an embodiment may classify the multiple images 10 using the extracted deep features and a classification model 30.
The classification model 30 may be, for example and without limitation, a neural-network-based model. For example, a model such as a DNN, an RNN, or a BRDNN may be used as the classification model 30, but the model is not limited thereto.
In addition, the classification model 30 may initially be a model trained based on general data. For example, the classification model 30 may be trained using images, deep features extracted from the images, and image classification results as learning data. For example, the classification model 30 may be trained using an image of a puppy, deep features of the image (e.g., the shape or breed of the puppy), and the classification of the image (e.g., puppy, beagle, or poodle) as learning data. Also, the classification model 30 may be trained using an image of a night view, deep features extracted from the image (e.g., the location of the night view or the names of buildings), and the classification of the image (e.g., landscape or night view) as learning data.
Accordingly, the classification model 30 may classify the multiple images 10 into groups based on similarities between the deep features of the multiple images 10. Here, the similarity between deep features may be indicated, for example, by the distance between the vectors extracted as the deep features. For example, when the vectors are plotted in a coordinate system, the similarity may be high when the distance between them is short, and low when the distance between them is long. However, embodiments are not limited thereto.
The classification model 30 according to an embodiment may classify images whose vectors fall within a preset distance range into one group. For example, among the multiple images 10, images plotted in a first region of the coordinate space (e.g., images whose features indicate "food") may be classified into a first group 41, images plotted in a second region (e.g., images whose features indicate "baby") may be classified into a second group 42, and images plotted in a third region (e.g., images whose features indicate "tower") may be classified into a third group 43.
In addition, according to various embodiments, the classification model 30 may indicate that the distance between vectors of images including the shape of a puppy (such as a beagle or a poodle) is short, and that the distances between vectors representing a landscape, a night view, and a building are short, while indicating that the distance between a vector of an image including a puppy and a vector of an image including a landscape or a night view is long. However, the various example embodiments are not limited thereto.
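The distance-based grouping described above can be sketched as follows. The feature vectors and the threshold are hypothetical; a real system would use vectors produced by the feature extraction model.

```python
import math

# Hypothetical deep-feature vectors (in practice, extracted by the model).
features = {
    "dog1.jpg":   [0.9, 0.1, 0.0],
    "dog2.jpg":   [0.8, 0.2, 0.1],
    "night1.jpg": [0.1, 0.9, 0.8],
    "night2.jpg": [0.0, 0.8, 0.9],
}

def distance(a, b):
    # Euclidean distance between two deep-feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_by_distance(features, threshold=0.5):
    """Greedily assign each image to the first group whose representative
    vector is within `threshold`; otherwise start a new group."""
    groups = []  # list of (representative_vector, [image_names])
    for name, vec in features.items():
        for rep, members in groups:
            if distance(rep, vec) < threshold:
                members.append(name)
                break
        else:
            groups.append((vec, [name]))
    return [members for _, members in groups]

print(group_by_distance(features))
# [['dog1.jpg', 'dog2.jpg'], ['night1.jpg', 'night2.jpg']]
```

Images whose vectors are near each other (the two dog images, the two night views) end up in the same group, while distant vectors start new groups, mirroring the "preset distance range" grouping described in the text.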
The feature extraction model 20 and the classification model 30 according to an example embodiment may be configured in a single neural network, or may be configured independently in different neural networks.
According to an example embodiment, whether the existing feature extraction model and the existing classification model need to be updated may be determined using the result of the classification. When it is determined that they need to be updated, the feature extraction model 20 and the classification model 30 may be retrained using user data.
For example, at least one of the feature extraction model 20 and the classification model 30 may be trained via supervised learning that uses the multiple images 10 classified into groups as input values. When the feature extraction model 20 and the classification model 30 are trained via supervised learning, they may be trained using multiple images generated by the user and mapping information between each of the multiple images and a keyword of the user (e.g., an image feature input by the user).
At least one of the feature extraction model 20 and the classification model 30 may be trained via unsupervised learning, in which the image classifier is retrained without any particular supervision and image classification standards related to the learning result of a language model are discovered. In addition, by associating the learning result of the language model with the learning result of supervised learning, at least one of the feature extraction model 20 and the classification model 30 may be trained by discovering an image classification standard without supervision.
In addition, the feature extraction model 20 and the classification model 30 may be trained via reinforcement learning, which uses feedback indicating whether a classification result of an image obtained through the learning is correct. However, embodiments are not limited thereto.
Fig. 2 is the stream for showing the exemplary method operated to electronic equipment 100 according to an example embodiment of the present disclosure Cheng Tu.
Referring to Fig. 2, in operation S210, the electronic device 100 according to an embodiment may obtain a plurality of images.

In operation S220, the electronic device 100 according to an embodiment may extract depth features of the plurality of images by using a feature extraction model.

The feature extraction model may be a neural network-based model. For example, a model such as a DNN, an RNN, or a BRDNN may be used as the feature extraction model, but the feature extraction model is not limited thereto. A depth feature of an image may include a vector extracted from at least one layer included in at least one neural network by inputting the image to the at least one neural network.

In addition, the depth feature of the image may be stored in a metadata format such as EXIF. When the format of an image file is not JPEG, the image file may be converted to JPEG and the depth feature of the image may be stored in EXIF. However, embodiments are not limited thereto.

In addition, by storing the depth feature of an image as metadata of the image, the electronic device 100 does not need to re-extract the depth feature each time the image is classified, and may classify the image by using the stored depth feature. Accordingly, the image classification speed may be improved.

In addition, when the depth feature of an image is stored in EXIF, the information about the depth feature is retained even when the image file is stored in another electronic device. For example, even when an image file is transmitted to an external device other than the electronic device 100, the information about the depth feature of the image is maintained in the external device. Accordingly, the other device may classify or search for images by using the information about the depth feature, and when the feature extraction model and the classification model are respectively stored in a first electronic device and a second electronic device, the second electronic device may classify images by using the depth features extracted by the first electronic device.
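As a hedged illustration of carrying a depth feature inside image metadata, a feature vector could be serialized into an EXIF-ready text field (for example, the UserComment tag) by packing the floats and base64-encoding them. The tag choice, the JSON wrapper, and the provenance fields below are assumptions for illustration, not the patent's format:

```python
import base64
import json
import struct

def encode_depth_feature(vector, network="net1", layer=5):
    """Pack a float vector plus its provenance into an EXIF-ready ASCII string."""
    raw = struct.pack(f"<{len(vector)}f", *vector)  # little-endian float32
    payload = {
        "net": network,          # which neural network produced the feature
        "layer": layer,          # which layer the vector was taken from
        "dim": len(vector),
        "data": base64.b64encode(raw).decode("ascii"),
    }
    return json.dumps(payload)  # store this string in e.g. the UserComment tag

def decode_depth_feature(text):
    """Recover the vector and its provenance from the stored string."""
    payload = json.loads(text)
    raw = base64.b64decode(payload["data"])
    vector = list(struct.unpack(f"<{payload['dim']}f", raw))
    return vector, payload["net"], payload["layer"]

if __name__ == "__main__":
    text = encode_depth_feature([0.5, -1.25, 3.0])
    print(decode_depth_feature(text))
```

Because the string travels with the file, any device that knows the encoding can reuse the feature without re-running the network, which matches the portability argument above.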
In operation S230, the electronic device 100 according to an embodiment may classify the plurality of images based on the depth features of the plurality of images and a classification model.

The classification model may be a neural network-based model. For example, a model such as a DNN, an RNN, or a BRDNN may be used as the classification model, but the classification model is not limited thereto.

In addition, the classification model may initially be a model trained based on general data. The classification model may classify the plurality of images into particular groups based on similarities between the depth features of the plurality of images. Here, the similarity between depth features may be indicated by the distance between the vectors extracted as the depth features.
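A minimal sketch of this distance-based similarity, assuming Euclidean distance between feature vectors (the description above does not fix a specific metric, so the metric and threshold here are illustrative assumptions):

```python
import math

def euclidean_distance(a, b):
    """Distance between two depth-feature vectors; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def are_similar(a, b, threshold=1.0):
    """Treat two images as candidates for the same group when their features are close."""
    return euclidean_distance(a, b) <= threshold

if __name__ == "__main__":
    dog1, dog2, person = [1.0, 0.0], [1.2, 0.1], [5.0, 4.0]
    print(are_similar(dog1, dog2))    # close vectors -> same group
    print(are_similar(dog1, person))  # distant vectors -> different groups
```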
In addition, the electronic device 100 according to an embodiment may display a result of classifying the plurality of images.

In operation S235, the electronic device 100 according to an embodiment may determine whether the feature extraction model and the classification model need to be updated, by using the result of classifying the plurality of images.

For example, the electronic device 100 may determine whether the feature extraction model and/or the classification model need to be updated based on the balance of the numbers of images included in the groups into which the plurality of images are classified. When, based on the result of classifying the plurality of images, images are included only in a particular group while no images, or fewer images than a preset number, are included in the other groups, the electronic device 100 may determine that the feature extraction model and/or the classification model need to be updated. On the other hand, when, based on the result of classifying the plurality of images, the number of images included in each particular group is equal to or greater than the preset number, the electronic device 100 may determine that the feature extraction model and/or the classification model do not need to be updated. However, example embodiments are not limited thereto, and the necessity of updating the feature extraction model and the classification model may be determined based on various criteria.
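The balance criterion could be sketched as follows; the preset count and the exact "some groups are sparse while others are populated" rule are assumptions chosen for illustration:

```python
from collections import Counter

def needs_update(group_labels, preset_count=2):
    """Return True when the classification result is unbalanced: images pile up
    in some groups while other groups stay below the preset number of images."""
    counts = Counter(group_labels)
    sparse = [g for g, n in counts.items() if n < preset_count]
    # Unbalanced: at least one group is (nearly) empty while another is populated.
    return len(sparse) > 0 and len(counts) > len(sparse)

if __name__ == "__main__":
    print(needs_update(["dog"] * 8 + ["people"]))      # one sparse group -> True
    print(needs_update(["dog"] * 5 + ["people"] * 4))  # balanced -> False
```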
When it is determined that the feature extraction model and/or the classification model need to be updated, in operation S240, the electronic device 100 according to an embodiment may train and update at least one of the feature extraction model and the classification model.

The electronic device 100 according to an embodiment may retrain and update at least one of the feature extraction model and the classification model by using user data (for example, a plurality of user images). Since the user data is used to update the feature extraction model and the classification model, the feature extraction model and the classification model may be updated to be suitable for the user data.

The electronic device 100 according to an embodiment may update at least one of the feature extraction model and the classification model periodically or upon a user request. At least one of the feature extraction model and the classification model may be updated when the electronic device 100 is in a preset state. For example, at least one of the feature extraction model and the classification model may be updated when the electronic device 100 enters a standby mode or is in a charging state, or when the electronic device 100 is connected to a Wi-Fi network. However, embodiments are not limited thereto.

The electronic device 100 according to an embodiment may extract depth features of the obtained plurality of images and classify the plurality of images by using the feature extraction model and the classification model updated via learning. In addition, the electronic device 100 may re-extract the depth features of the previously classified plurality of images. Accordingly, the information about the depth features of the previously classified plurality of images may be updated or added. In addition, the electronic device 100 may reclassify the plurality of images based on the re-extracted depth features.
Fig. 3A, Fig. 3B, and Fig. 3C are reference diagrams illustrating example methods of extracting a depth feature of an image, according to an example embodiment of the present disclosure.

A depth feature of an image according to an embodiment may include, for example and without limitation, a vector extracted from at least one layer included in at least one neural network by inputting the image to the at least one neural network.

Referring to Fig. 3A, the electronic device 100 according to an embodiment may extract a plurality of depth features of an image by inputting the image to different types of neural networks. For example, the image may be input to a first neural network 301 to extract a first depth feature from an n-th layer of the first neural network 301, the image may be input to a second neural network 302 to extract a second depth feature from an n-th layer of the second neural network 302, and the image may be input to a third neural network 303 to extract a third depth feature from an n-th layer of the third neural network 303.

In addition, referring to Fig. 3B, the electronic device 100 according to an embodiment may extract a plurality of depth features of an image from different sub-networks included in one neural network by inputting the image to the one neural network. For example, the image may be input to a neural network including a first sub-network 304 and a second sub-network 305 to extract a first depth feature from an n-th layer of the first sub-network 304 and a second depth feature from an n-th layer of the second sub-network 305.

In addition, referring to Fig. 3C, the electronic device 100 according to an embodiment may extract a plurality of depth features from different layers by inputting an image to one neural network. For example, the image may be input to the one neural network to extract a first depth feature from an n-th layer of the one neural network and a second depth feature from an m-th layer of the one neural network.

The electronic device 100 may store the extracted depth feature together with information about the neural network from which the depth feature was extracted, such as the layer information and sub-network information of the neural network. The electronic device 100 according to an embodiment may classify a plurality of images or search for an image by using the depth features extracted from one neural network.
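To illustrate the Fig. 3C case (features taken from different layers of one network), the following toy fully-connected forward pass in plain Python records every layer's activation so any layer can serve as a depth feature. The weights and layer sizes are made up; a real implementation would use a deep-learning framework (for example, forward hooks in PyTorch):

```python
def relu(x):
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    """One fully connected layer: weights is a list of rows, one per output unit."""
    return [sum(w * v for w, v in zip(row, x)) + b for row, b in zip(weights, bias)]

def forward_with_activations(x, layers):
    """Run the network and keep each layer's output, so the first depth feature
    can come from layer n and the second from layer m of the same network."""
    activations = []
    for weights, bias in layers:
        x = relu(dense(x, weights, bias))
        activations.append(x)
    return activations

if __name__ == "__main__":
    # Two tiny layers with fixed, made-up weights.
    layers = [
        ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),  # layer 1: identity
        ([[1.0, 1.0]], [0.5]),                   # layer 2: sum of inputs + bias
    ]
    acts = forward_with_activations([2.0, 3.0], layers)
    print(acts[0])  # depth feature from layer 1
    print(acts[1])  # depth feature from layer 2
```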
Fig. 4 is a diagram illustrating example results of the electronic device 100 classifying a plurality of images by using a feature extraction model and a classification model trained based on general data, according to an example embodiment of the present disclosure.

Referring to Fig. 4, the electronic device 100 according to an embodiment may obtain a plurality of images. For example, as shown in Fig. 4, the electronic device 100 obtains first through tenth images A1, A2, A3, A4, A5, A6, A7, A8, A9, and A10. The electronic device 100 may classify the obtained first through tenth images A1 through A10 into particular groups by using a feature extraction model and a classification model. Here, the feature extraction model and the classification model may be models pre-trained based on general data. For example, the feature extraction model and the classification model may be models trained to classify a plurality of images into six categories (for example, "people", "dog", "landscape", "document", "food", and "road"), but are not limited thereto. For example, the types and number of categories may be determined by the feature extraction model and the classification model via learning, or may be set based on a user input.

When the electronic device 100 classifies the first through tenth images A1 through A10 by using the feature extraction model and the classification model, the first through fifth images A1 through A5, the seventh and eighth images A7 and A8, and the tenth image A10 may be classified into the "dog" category, and the sixth image A6 and the ninth image A9 may be classified into the "people" category. When the first through tenth images A1 through A10 are classified according to the feature extraction model and the classification model trained based on general data, only the "dog" and "people" categories are used, while the "landscape", "food", "road", and "document" categories are not used. Accordingly, when the user does not use images related to "landscape", "food", "road", and "document" and mainly uses only images related to "dog" and "people", the user may perceive degraded performance of the feature extraction model and the classification model trained based on general data.
Fig. 5 is a diagram illustrating example results of the electronic device 100 classifying a plurality of images by using an updated feature extraction model and an updated classification model, according to an example embodiment of the present disclosure.

The electronic device 100 according to an embodiment may train and update at least one of the feature extraction model and the classification model by using user images. For example, the electronic device 100 may determine the necessity of updating the feature extraction model and the classification model by using the result of classifying the first through tenth images A1 through A10 of Fig. 4. In addition, the electronic device 100 may train and update at least one of the feature extraction model and the classification model by using the user images.

Referring to Fig. 5, the electronic device 100 may reclassify the previously classified images by using the updated feature extraction model and the updated classification model. For example, when the plurality of images, such as the first through tenth images A1 through A10, are classified by using the updated feature extraction model and the updated classification model, the first image A1, the second image A2, and the eighth image A8 may be classified into a first group, the third image A3 and the fourth image A4 may be classified into a second group, the sixth image A6 and the ninth image A9 may be classified into a third group, the fifth image A5, the seventh image A7, and the tenth image A10 may be classified into a fourth group, the sixth through eighth images A6 through A8 and the tenth image A10 may be classified into a fifth group, and the first through fourth images A1 through A4, the seventh image A7, and the ninth image A9 may be classified into a sixth group. Here, the first through tenth images A1 through A10 may be classified into the first through sixth groups in an overlapping manner.

Compared with the classification result of Fig. 4, the first through tenth images A1 through A10 are classified into only two groups in Fig. 4 but into six groups in Fig. 5. Compared with the feature extraction model and the classification model trained based on general data, the feature extraction model and the classification model trained based on user data can classify user images in more diverse aspects.

In addition, the electronic device 100 may generate group names of the groups into which the plurality of images are classified, automatically or based on a user input. The electronic device 100 may automatically generate the group names of the groups by using a language model. The electronic device 100 may detect a keyword corresponding to an image by comparing the similarities between keywords in the language model with the similarities between the depth features of images.
For example, when a keyword (or tag) "beagle" is assigned to the first group, the group name of the first group may be set to "beagle". Here, by using the language model, a second keyword whose similarity to "beagle" corresponds to the similarity (the distance between depth features) between the depth features of the images included in the first group and the depth features of the images included in the second group may be determined. The determined second keyword may be assigned as the group name of the second group. The electronic device 100 may determine the group names corresponding to the first through third groups based on information indicating that the first through third groups correspond to "dog" in Fig. 4. For example, by comparing the similarities between the sub-keywords of "dog" in the language model with the similarities between the depth features of the images included in the first through third groups, the electronic device 100 may determine keywords respectively corresponding to the depth features of the images included in the first group, the depth features of the images included in the second group, and the depth features of the images included in the third group. When the keywords are determined, the electronic device 100 may assign the determined keywords as the group names of the respective groups. Accordingly, the first group may have the group name "beagle", the second group may have the group name "poodle", the third group may have the group name "Anne", the fourth group may have the group name "golden retriever", the fifth group may have the group name "indoor", and the sixth group may have the group name "outdoor". In addition, the electronic device 100 may generate folders corresponding to each of the first through sixth groups and store images classified into the same group in the same folder. In addition, the electronic device 100 may determine keywords respectively corresponding to the images included in a group.
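One plausible reading of this keyword matching, sketched under stated assumptions: the language model supplies an embedding per candidate keyword, each group is represented by the centroid of its images' depth features, and the keyword whose embedding lies nearest to the centroid becomes the group name. Both the embeddings and the centroid rule are illustrative assumptions, not the patent's exact algorithm:

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def nearest_keyword(group_features, keyword_embeddings):
    """Pick the candidate keyword whose (assumed) embedding lies closest to the
    centroid of the group's depth features."""
    c = centroid(group_features)
    def dist(vec):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, vec)))
    return min(keyword_embeddings, key=lambda kw: dist(keyword_embeddings[kw]))

if __name__ == "__main__":
    # Made-up 2-D "embeddings" for sub-keywords of "dog".
    keywords = {"beagle": [1.0, 0.0], "poodle": [0.0, 1.0]}
    group = [[0.9, 0.1], [1.1, -0.1]]  # depth features of images in one group
    print(nearest_keyword(group, keywords))  # -> beagle
```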
Fig. 6A and Fig. 6B are diagrams illustrating example methods of classifying a plurality of images by using feature extraction models and classification models trained by using user data, according to an example embodiment of the present disclosure.

Fig. 6A illustrates a result of classifying a plurality of images by using a feature extraction model and a classification model trained by using data of a first user.

Referring to Fig. 6A, the images of the first user may include first through eleventh images B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, and B11. Here, the images of the first user may be, for example and without limitation, images obtained by a first user device, such as images stored in the first user device, images captured by the first user device, or images received from an external device.

The first user device according to an embodiment may classify the first through eleventh images B1 through B11 into particular groups by using a first feature extraction model and a first classification model. The first feature extraction model and the first classification model may be a feature extraction model and a classification model trained by using the images of the first user.

The first feature extraction model extracts a depth feature of each of the images of the first user, and the first classification model may classify the images of the first user into particular groups based on the depth features of the images of the first user. Here, the types or number of the particular groups into which the images of the first user are classified may be determined by the first feature extraction model and the first classification model learned based on the user data, or may be set based on a user input. For example, among the images of the first user, the first image B1 and the second image B2 may be classified into a first group, the seventh image B7 and the eighth image B8 may be classified into a second group, the third, sixth, and ninth images may be classified into a third group, the tenth image B10 and the eleventh image B11 may be classified into a fourth group, the fourth image B4 and the fifth image B5 may be classified into a fifth group, and the third image B3 and the ninth image B9 may be classified into a sixth group.

In addition, the first user device may generate the group names of the first through sixth groups into which the first through eleventh images B1 through B11 are classified, automatically or based on a user input. For example, the first user device may automatically generate the group names or keywords by using a language model and the depth feature information of the first through eleventh images B1 through B11 included in the first through sixth groups.

For example, the first group may have the group name "food", the second group may have the group name "dog", the third group may have the group name "landscape", the fourth group may have the group name "people", the fifth group may have the group name "document", and the sixth group may have the group name "road".
Fig. 6B illustrates a result of classifying a plurality of images by using a feature extraction model and a classification model trained by using data of a second user.

Referring to Fig. 6B, the images of the second user may include first through eleventh images C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, and C11. Here, the images of the second user may be, for example and without limitation, images obtained by a second user device, such as images stored in the second user device, images captured by the second user device, or images received from an external device.

The second user device according to an embodiment may classify the first through eleventh images C1 through C11 into particular groups by using a second feature extraction model and a second classification model. The second feature extraction model and the second classification model may be a feature extraction model and a classification model trained by using the images of the second user.

The second feature extraction model extracts a depth feature of each of the images of the second user, and the second classification model may classify the images of the second user into particular groups based on the depth features of the images of the second user. Here, the types or number of the particular groups into which the images of the second user are classified may be determined by the second feature extraction model and the second classification model learned based on the user data, or may be set based on a user input. For example, among the images of the second user, the first image C1, the third image C3, and the seventh image C7 may be classified into a first group, the second image C2 and the ninth image C9 may be classified into a second group, the third image C3 and the sixth image C6 may be classified into a third group, the fourth image C4 and the fifth image C5 may be classified into a fourth group, the seventh image C7 and the eighth image C8 may be classified into a fifth group, the tenth image C10 and the eleventh image C11 may be classified into a sixth group, and the first image C1, the fifth image C5, and the eleventh image C11 may be classified into a seventh group.

In addition, the second user device may generate the group names of the first through seventh groups into which the first through eleventh images C1 through C11 are classified, automatically or based on a user input. The second user device may automatically generate the group names or keywords by using a language model and the depth feature information of the first through eleventh images C1 through C11 included in the first through seventh groups.

For example, the first group may have the group name "road", the second group may have the group name "river", the third group may have the group name "traditional Korean house", the fourth group may have the group name "beach", the fifth group may have the group name "forest", the sixth group may have the group name "building", and the seventh group may have the group name "sky".
As described with reference to Fig. 6A and Fig. 6B, the feature extraction model and the classification model according to an embodiment may be trained and updated by using user data.

The feature extraction model and the classification model updated according to an embodiment may reclassify a plurality of images included in a first group among the previously classified groups into at least two groups, or may reclassify a plurality of images included in first and second groups among the previously classified groups into one group. For example, the first feature extraction model and the first classification model optimized for the images of the first user may classify the third image B3 and the sixth image B6 into one group (the "landscape" group). On the other hand, the second feature extraction model and the second classification model optimized for the images of the second user may classify the first image C1, which is identical to the third image B3, and the fifth image C5, which is identical to the sixth image B6, into different groups (the "road" group and the "beach" group).

Accordingly, the electronic device 100 according to an embodiment provides a feature extraction model and a classification model optimized for each user, and may therefore classify a plurality of images by using a classification criterion optimized for each user rather than a single uniform classification criterion.
Fig. 7A is a flowchart illustrating example operations of the server 200 and the electronic device 100, according to an example embodiment of the present disclosure.

Referring to Fig. 7A, in operation S710, the server 200 may train a feature extraction model and a classification model by using general data.

For example, the server 200 may train a criterion for classifying images based on the general data. The server 200 may train a criterion for extracting depth features of a plurality of images based on the general data. In addition, the server 200 may perform training for classifying the plurality of images into particular groups based on similarities between the depth features of the plurality of images.

In operation S720, the electronic device 100 may receive the feature extraction model and the classification model from the server 200.

In operation S730, the electronic device 100 may obtain a plurality of images.

In operation S740, the electronic device 100 may extract depth features of the plurality of images by using the feature extraction model received from the server 200.

In operation S750, the electronic device 100 may classify the plurality of images based on the classification model received from the server 200 and the extracted depth features.

Operations S730 through S750 may correspond, for example, to operations S210 through S230 of Fig. 2, respectively.

In operation S760, the electronic device 100 may transmit the result of classifying the plurality of images to the server 200.

In operation S770, the server 200 may determine whether the feature extraction model and/or the classification model need to be updated by using the result received from the electronic device 100, and may train and update at least one of the feature extraction model and the classification model.

In operation S780, the electronic device 100 may receive the trained and updated feature extraction model and classification model from the server 200.

In operation S790, the electronic device 100 may classify the plurality of images by using the updated feature extraction model and/or classification model.

In addition, the electronic device 100 may re-extract the depth features of the previously classified plurality of images and reclassify the plurality of images based on the re-extracted depth features.

In addition, operations S710 through S790 of Fig. 7A may be performed by the electronic device 100 or the server 200 according to an embodiment. For example, in Fig. 7A, operations S710 through S770 are performed by the server 200, but embodiments are not limited thereto, and operations S710 through S770 may be performed by the electronic device 100. In addition, in Fig. 7A, operations S730 through S750 and operation S790 may be performed by the electronic device 100, but embodiments are not limited thereto, and operations S730 through S750 and operation S790 may be performed by the server 200.
Fig. 7B is a flowchart illustrating an example method of operating the server 200, a first processor 120a, and a second processor 120b, according to an example embodiment of the present disclosure.

Referring to Fig. 7B, the electronic device 100 may include the first processor 120a and the second processor 120b.

The first processor 120a may, for example, control execution of at least one application installed in the electronic device 100 and perform graphics processing on an image (for example, a live-view image or a captured image) obtained by the electronic device 100. The first processor 120a may include various processing circuits and may be implemented in the form of a system on chip (SoC) integrating a central processing unit (CPU), a graphics processing unit (GPU), a communication chip, and a sensor. In addition, the first processor 120a may be referred to herein as an application processor (AP).

The second processor 120b may classify images by using the feature extraction model and the classification model.

In addition, the second processor 120b may, for example, be manufactured in the form of an AI-dedicated hardware chip that performs an image classification function by using a data recognition model (for example, a feature extraction model and a data classification model). According to an embodiment, the AI-dedicated hardware chip may include a GPU for the data recognition model that uses visual analysis as an element technology.

In addition, the electronic device 100 may include, instead of the second processor 120b, a third processor or a fourth processor that performs all or some of the functions of the second processor 120b.

According to an embodiment, the functions performed by the first processor 120a may be performed by applications that are stored in a memory and perform various functions. The functions performed by the second processor 120b may be performed by an operating system (OS) of the electronic device 100.

For example, a camera application may generate an image and transmit the image to the OS including the data recognition model. In addition, a gallery application that displays images may receive, from the OS, an image extracted by using the image transmitted to the OS, and display the extracted image on a display.
Referring to Fig. 7B, in operation S7110, the server 200 may train a feature extraction model and a classification model based on general data.

The server 200 may transmit the feature extraction model and the classification model to the electronic device 100. For example, in operation S7120, the electronic device 100 may be set such that the second processor 120b can use the feature extraction model and the classification model.

In operation S7130, the first processor 120a may obtain a plurality of images.

In operation S7140, the first processor 120a may transmit the obtained images to the second processor 120b.

In operation S7150, the second processor 120b may extract depth features of the plurality of images by using the feature extraction model received from the server 200.

In operation S7160, the second processor 120b may classify the plurality of images based on the classification model received from the server 200 and the extracted depth features.

In operation S7170, the second processor 120b may determine whether the feature extraction model and/or the classification model need to be updated, and may train and update at least one of the feature extraction model and/or the classification model.

In operation S7180, the second processor 120b may classify the plurality of images by using the updated feature extraction model and/or classification model.

In addition, the second processor 120b may re-extract the depth features of the previously classified plurality of images and reclassify the plurality of images based on the re-extracted depth features.
Fig. 7 C be show according to an example embodiment of the present disclosure to server 200, first processor 120a, second processing The flow chart for the exemplary method that device 120b and third processor 120c are operated.
Referring to Fig. 7C, the electronic device 100 may include a first processor 120a, a second processor 120b, and a third processor 120c.
Referring to Fig. 7C, in operation S7210, the server 200 may train a feature extraction model and a classification model based on general-purpose data.
In operation S7220, the server 200 may transmit the feature extraction model and the classification model to the electronic device 100. For example, the electronic device 100 may be configured such that the second processor 120b uses the feature extraction model and the third processor 120c uses the classification model.
In operation S7230, the first processor 120a may obtain a plurality of images.
In operation S7240, the first processor 120a may transmit the obtained images to the second processor 120b.
In operation S7250, the second processor 120b may extract deep features of the plurality of images by using the feature extraction model received from the server 200.
In operation S7260, the second processor 120b may transmit the extracted deep features and the images to the third processor 120c.
In operation S7270, the third processor 120c may classify the plurality of images based on the deep features and the classification model.
In operation S7280, the third processor 120c may transmit the result of the classification to the second processor 120b.
In operation S7290, the second processor 120b may determine whether the feature extraction model needs to be updated, and may train the feature extraction model by using the result of the classification.
In operation S7300, the third processor 120c may determine whether the classification model needs to be updated, and may train the classification model by using the result of the classification.
In operation S7310, the second processor 120b may extract deep features of the plurality of images by using the updated feature extraction model. In addition, the second processor 120b may re-extract the deep features of the plurality of images whose deep features were previously extracted.
In operation S7320, the second processor 120b may transmit the plurality of images and the extracted deep features to the third processor 120c.
In operation S7330, the third processor 120c may classify the plurality of images by using the updated classification model. In addition, the third processor 120c may reclassify the plurality of images that were previously classified.
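The extract → classify → decide-update → re-extract → reclassify loop of operations S7250 through S7330 can be sketched as follows. This is a minimal illustration only: the toy "deep feature" (a mean/max pair), the drift-based update criterion, and the scale parameter standing in for an updated model are all assumptions, not the patent's implementation.

```python
# Minimal sketch of the classify -> decide-update -> reclassify loop in
# operations S7250-S7330. The "deep feature" here is a toy 2-D vector
# (mean, max) and the update criterion is an assumed drift threshold.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def extract_features(images, scale=1.0):
    # Stand-in for the feature extraction model (second processor 120b).
    return [(scale * sum(img) / len(img), scale * max(img)) for img in images]

def classify(features, centers):
    # Stand-in for the classification model (third processor 120c):
    # assign each feature vector to its nearest group center.
    return [min(range(len(centers)), key=lambda i: dist(f, centers[i]))
            for f in features]

def needs_update(features, labels, centers, threshold=1.0):
    # S7290/S7300: update when features drift too far from their centers.
    return max(dist(f, centers[l]) for f, l in zip(features, labels)) > threshold

images = [[1, 2, 3], [2, 3, 4], [8, 9, 10]]
centers = [(2.0, 3.0), (9.0, 10.0)]

feats = extract_features(images)            # S7250
labels = classify(feats, centers)           # S7270
if needs_update(feats, labels, centers):    # S7290/S7300
    feats = extract_features(images, scale=1.01)  # S7310: re-extract (assumed updated model)
    labels = classify(feats, centers)             # S7330: reclassify
print(labels)
```

In a real deployment the two models would be neural networks and the processors would exchange tensors rather than Python lists, but the control flow between the three processors follows this shape.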
Fig. 8A is a flowchart illustrating an example method of operating the electronic device 100, according to an example embodiment of the present disclosure.
Referring to Fig. 8A, in operation S810, the electronic device 100 may obtain a first image. Here, the first image may be an image captured by the electronic device 100 or an image pre-stored in the electronic device 100. Alternatively, the first image may be received from an external device.
In operation S820, the electronic device 100 may extract a deep feature of the first image by using the feature extraction model. For example, the electronic device 100 may extract the deep feature of the first image by inputting the first image to at least one neural network included in the feature extraction model.
In operation S830, the electronic device 100 may extract, from among pre-stored images, at least one image having a deep feature similar to the deep feature of the first image.
The electronic device 100 may extract an image similar to the first image by using the classification model, based on the similarity between the deep feature of the first image and the deep features of the pre-stored images. For example, the electronic device 100 may extract, from among the pre-stored images, an image whose deep feature vector differs from that of the first image within a preset range.
In operation S840, the electronic device 100 may display the extracted image on a display.
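Operations S820 through S840 amount to a nearest-neighbor search over stored feature vectors. A minimal sketch, assuming Euclidean distance and a hypothetical preset range; the feature vectors and file names below are invented for illustration, and the neural network that would produce them is stubbed out.

```python
import math

def euclidean(a, b):
    # Distance between two deep-feature vectors; shorter means more similar.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_similar(query_feature, stored, preset_range):
    # S830: keep pre-stored images whose deep features lie within the
    # preset range of the query image's deep feature.
    return [name for name, feat in stored.items()
            if euclidean(query_feature, feat) <= preset_range]

# Hypothetical deep features of pre-stored images (the patent suggests
# these could be stored as EXIF information of each image).
stored_features = {
    "dog_01.jpg":    [0.9, 0.1, 0.0],
    "dog_02.jpg":    [0.8, 0.2, 0.1],
    "bridge_01.jpg": [0.0, 0.1, 0.9],
}

query = [0.85, 0.15, 0.05]  # deep feature of the first image (S820)
print(find_similar(query, stored_features, preset_range=0.3))
```

At the scale of a real photo library this linear scan would be replaced by an index structure, but the preset-range criterion is the same.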
Fig. 8B is a flowchart illustrating an example method of operating the first processor 120a and the second processor 120b included in the electronic device 100, according to an example embodiment of the present disclosure.
Referring to Fig. 8B, the electronic device 100 may include the first processor 120a and the second processor 120b.
Referring to Fig. 8B, in operation S8110, the first processor 120a may obtain a first image.
In operation S8120, the first processor 120a may transmit the first image to the second processor 120b.
In operation S8130, the second processor 120b may extract a deep feature of the first image by using the feature extraction model.
In operation S8140, the second processor 120b may extract, from among pre-stored images, at least one image having a deep feature similar to the deep feature of the first image, by using the classification model and the deep feature of the first image. For example, the second processor 120b may extract, from among the pre-stored images, an image whose deep feature vector differs from that of the first image within a preset range.
In operation S8150, the second processor 120b may transmit the extracted at least one image to the first processor 120a.
In operation S8160, the first processor 120a may display the extracted at least one image on a display.
Fig. 8C is a flowchart illustrating an example method of operating the first processor 120a, the second processor 120b, and the third processor 120c included in the electronic device 100, according to an example embodiment.
Referring to Fig. 8C, in operation S8210, the first processor 120a may obtain a first image.
In operation S8220, the first processor 120a may transmit the first image to the second processor 120b.
In operation S8230, the second processor 120b may extract a deep feature of the first image by using the feature extraction model.
In operation S8240, the second processor 120b may transmit the deep feature of the first image to the third processor 120c.
In operation S8250, the third processor 120c may extract, from among pre-stored images, at least one image having a deep feature similar to the deep feature of the first image, by using the classification model and the deep feature of the first image. For example, the third processor 120c may extract, from among the pre-stored images, an image whose deep feature vector differs from that of the first image within a preset range.
In operation S8260, the third processor 120c may transmit the extracted at least one image to the first processor 120a.
In operation S8270, the first processor 120a may display the extracted at least one image on a display.
Figs. 9 and 10 are diagrams illustrating example methods of searching for images, performed by the electronic device 100, according to an example embodiment of the present disclosure.
Referring to Fig. 9, the electronic device 100 according to an embodiment may obtain a first image 910. Here, the first image 910 may be an image captured by the electronic device 100 or an image pre-stored in the electronic device 100. Alternatively, the first image 910 may be an image received from an external device.
The electronic device 100 may extract a deep feature of the first image 910 by using the feature extraction model. The feature extraction model may include at least one neural network, and the deep feature may include a vector extracted from at least one layer by inputting the first image 910 to the at least one neural network.
Here, the deep feature of the first image 910 may represent a feature of an object included in the first image 910; for example, it may represent a feature of a 'dog', but embodiments are not limited thereto.
The electronic device 100 may search for images similar to the first image 910 by using the classification model, based on the similarity between the deep feature of the first image 910 and the deep features of a plurality of pre-stored images 920. The plurality of pre-stored images 920 may include the deep features extracted from each of the images 920. For example, a deep feature may be stored as EXIF information of each of the plurality of images. Here, the deep feature of the first image 910 and the deep features of the plurality of images 920 may be deep features extracted by using one neural network of one version. The deep feature of the first image 910 and the deep features of the plurality of images 920 may be deep features extracted from the same sub-network of a neural network, or from the same layer of one neural network.
The electronic device 100 may extract, from among the plurality of images 920, images whose deep features differ from the deep feature of the first image 910 within a preset range. For example, as shown in Fig. 9, first through eighth images 930 from among the plurality of images 920 may be extracted and displayed as images similar to the first image 910.
When a user inputs a keyword corresponding to 'dog' (for example, dog, beagle, or puppy) to search for 'dog' images, the electronic device 100 can search only the images stored with that keyword; if the user stored a 'dog' image under a keyword unrelated to 'dog' (for example, happy), the electronic device 100 cannot find the 'dog' image stored under the unrelated keyword. Thus, the user has to remember the keywords used when storing images in order to find them.
However, as described with reference to Fig. 9, the electronic device 100 according to an embodiment extracts the deep feature of a 'dog' image by using an image including a 'dog' rather than by using a keyword, and searches for images having deep features similar to the extracted deep feature, so the user can find a desired image without having to remember all the keywords.
In addition, referring to Fig. 10, the electronic device 100 according to an example embodiment may obtain a second image 1010. The electronic device 100 may extract a deep feature of the second image 1010 by using the feature extraction model. The deep feature of the second image 1010 may represent a feature of a bridge, a feature of a night view, or a feature of a river, but is not limited thereto.
The electronic device 100 may search for images similar to the second image 1010 by using the classification model, based on the similarity between the deep feature of the second image 1010 and the deep features of the plurality of pre-stored images 920. For example, the electronic device 100 may extract, from among the plurality of images 920, images 1030 whose deep features differ from the deep feature of the second image 1010 within a preset range, and display them as images similar to the second image 1010.
A user would have to input the relevant keywords (bridge and night view) to search for 'night view with a bridge' images. However, as described with reference to Fig. 10, the electronic device 100 according to an embodiment extracts the deep feature of a 'night view with a bridge' image by using such an image rather than keywords, and searches for images having deep features similar to the extracted deep feature, so the user can find a desired image without having to remember all the keywords.
In addition, the electronic device 100 may perform an image search with an input keyword. For example, when a keyword is input, the electronic device 100 may determine keywords similar to the input keyword by using a language model, and search for images corresponding to the similar keywords. When a bookmark name of a first image corresponds to a similar keyword, the electronic device 100 may extract the first image and display it as an image search result. In addition, when a group name of an image group corresponds to a similar keyword, the electronic device 100 may extract a second image included in the group having that group name and display the second image as an image search result. However, embodiments are not limited thereto.
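The keyword path described above — expand the input keyword into similar keywords, then match bookmark names and group names — can be sketched as below. The `SIMILAR` table stands in for the language model, and all file, bookmark, and group names are invented for illustration.

```python
# Sketch of keyword-based search with similar-keyword expansion.
# SIMILAR stands in for the language model mentioned in the text.
SIMILAR = {
    "puppy": {"puppy", "dog", "beagle"},
    "dog":   {"dog", "puppy", "beagle"},
}

bookmarks = {"my_beagle.jpg": "beagle"}          # image -> bookmark name
groups = {"dog": ["dog_01.jpg", "dog_02.jpg"]}   # group name -> images

def search(keyword):
    similar = SIMILAR.get(keyword, {keyword})
    results = []
    # A bookmark name matching any similar keyword yields that image.
    results += [img for img, name in bookmarks.items() if name in similar]
    # A group name matching any similar keyword yields every image in the group.
    for name, imgs in groups.items():
        if name in similar:
            results += imgs
    return results

print(search("puppy"))
```

Note that this path complements, rather than replaces, the deep-feature search: it only finds images whose bookmark or group metadata happens to match the expanded keyword set.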
Fig. 11 is a block diagram illustrating an example configuration of the electronic device 100, according to an example embodiment of the present disclosure.
Referring to Fig. 11, the electronic device 100 according to an embodiment may include a processor (e.g., including processing circuitry) 120, a display 140, and a memory 130.
The display 140 according to an embodiment generates a driving signal by converting an image signal, a data signal, an on-screen display (OSD) signal, or a control signal processed by the processor 120. The display 140 may be implemented as a plasma display panel (PDP), a liquid crystal display (LCD), an organic light-emitting display (OLED), or a flexible display, or may be implemented as a three-dimensional (3D) display, etc., but is not limited thereto. In addition, the display 140 may be configured as a touch screen used as both an input device and an output device.
The display 140 according to an embodiment may display images. Here, an image displayed on the display 140 may be at least one of an image captured by the electronic device 100, an image stored in the electronic device 100, and an image received from an external device. However, embodiments are not limited thereto.
The processor 120 according to an embodiment may include various processing circuitry and may execute at least one program stored in the memory 130. The processor 120 may include, for example and without limitation, a single core, a dual core, a triple core, a quad core, or multiple cores. In addition, the processor 120 may include a plurality of processors. For example, the processor 120 may include a main processor (not shown) and a sub-processor (not shown) operating in a sleep mode.
The memory 130 according to an embodiment may store various types of data, programs, or applications for driving and controlling the electronic device 100.
A program stored in the memory 130 may include at least one instruction. A program (at least one instruction) or application stored in the memory 130 may be executed by the processor 120.
The processor 120 according to an embodiment may execute the at least one instruction stored in the memory 130 to obtain a plurality of images and extract deep features of the plurality of images by using the feature extraction model. For example, the feature extraction model may include a first neural network, and the processor 120 may extract a vector included in a deep feature from at least one layer of the first neural network by inputting each of the plurality of images to the first neural network. The processor 120 may classify the plurality of images into particular groups by using the extracted deep features and the classification model. For example, the classification model may include a second neural network that classifies the plurality of images into particular groups based on the similarity between the deep features of the plurality of images.
In addition, the processor 120 may execute the at least one instruction stored in the memory 130 to store the classified plurality of images together with their deep features. In addition, the processor 120 may control the display 140 to display the result of the classification, determine the necessity of updating the feature extraction model and the classification model by using the result of the classification, and train and update at least one of the feature extraction model and the classification model based on the result of the determination. When at least one of the feature extraction model and the classification model is updated, the processor 120 may reclassify the previously classified plurality of images by using the updated feature extraction model and classification model.
The processor 120 may extract a deep feature of a first image, and extract, from among the classified plurality of images, at least one image having a deep feature similar to the deep feature of the first image.
Fig. 12 is a block diagram illustrating the processor 120, according to an example embodiment of the present disclosure.
Referring to Fig. 12, the processor 120 according to an embodiment may include a data learning unit (e.g., including processing circuitry and/or program elements) 1300 and a data classification unit (e.g., including processing circuitry and/or program elements) 1400.
The data learning unit 1300 may include processing circuitry and/or program elements configured to learn a criterion for classifying images into particular groups. The data learning unit 1300 may learn which data is to be used for classifying images into particular groups and how to classify images by using the data. The data learning unit 1300 may obtain data to be used for learning, and learn the criterion for classifying images by applying the obtained data to a data classification model described below. The data classification model according to an embodiment may include the feature extraction model and the classification model.
The data classification unit 1400 may classify images based on data. The data classification unit 1400 may classify a plurality of images into particular groups by using the learned data classification model. The data classification unit 1400 may obtain particular data according to a criterion preset by learning, and classify images based on the particular data by using the data classification model with the obtained particular data as an input value. In addition, a result value output by the data classification model with the obtained particular data as an input value may be used to update the data classification model.
At least one of the data learning unit 1300 and the data classification unit 1400 may be manufactured in the form of at least one hardware chip and included in an electronic device. For example, at least one of the data learning unit 1300 and the data classification unit 1400 may be manufactured in the form of a dedicated artificial intelligence (AI) hardware chip, or may be manufactured as a part of an existing general-purpose processor (for example, a central processing unit (CPU) or an application processor) or a graphics-dedicated processor (for example, a graphics processing unit (GPU)), and may be included in any electronic device.
In this case, the data learning unit 1300 and the data classification unit 1400 may be included in one electronic device or in different electronic devices. For example, one of the data learning unit 1300 and the data classification unit 1400 may be included in an electronic device, and the other may be included in a server. In addition, via wire or wirelessly, model information constructed by the data learning unit 1300 may be provided to the data classification unit 1400, and data input to the data classification unit 1400 may be provided to the data learning unit 1300 as additional learning data.
In addition, at least one of the data learning unit 1300 and the data classification unit 1400 may be implemented as a software module. When at least one of the data learning unit 1300 and the data classification unit 1400 is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable recording medium. The software module may be provided by an operating system (OS) or by a specific application. Alternatively, a part of the software module may be provided by the OS, and the remainder may be provided by a specific application.
Fig. 13 is a block diagram illustrating an example data learning unit 1300, according to an example embodiment of the present disclosure.
Referring to Fig. 13, the data learning unit 1300 according to an embodiment may include a data obtainer (e.g., including processing circuitry and/or program elements) 1310, a preprocessor (e.g., including processing circuitry and/or program elements) 1320, a learning data selector (e.g., including processing circuitry and/or program elements) 1330, a model learner (e.g., including processing circuitry and/or program elements) 1340, and a model evaluator (e.g., including processing circuitry and/or program elements) 1350, but is not limited thereto. According to an embodiment, the data learning unit 1300 may include only some of the above components. For example, the data learning unit 1300 may include only the data obtainer 1310 and the model learner 1340. In addition, according to an embodiment, the data learning unit 1300 may further include components other than the above components.
The data obtainer 1310 may obtain data required for classifying a plurality of images into particular groups. The data obtainer 1310 may obtain data required for learning for classifying images.
The data obtainer 1310 may obtain a plurality of pieces of image data. For example, the data obtainer 1310 may receive image data through a camera of the electronic device including the data learning unit 1300. Alternatively, the data obtainer 1310 may receive image data through an external device capable of communicating with the electronic device including the data learning unit 1300.
The preprocessor 1320 may preprocess the obtained data such that the obtained data can be used for classifying images during learning. The preprocessor 1320 may process the obtained data into a preset format such that the obtained data can be used by the model learner 1340 described below.
The learning data selector 1330 may select data required for learning from the preprocessed data. The selected data may be provided to the model learner 1340. The learning data selector 1330 may select the data required for learning from the preprocessed data according to a preset criterion for classifying images. In addition, the learning data selector 1330 may select data according to a criterion preset by learning of the model learner 1340 described below.
The model learner 1340 may learn a criterion for classifying images based on learning data. In addition, the model learner 1340 may learn a selection criterion regarding which learning data is to be used for classifying images.
For example, the model learner 1340 may learn a criterion for extracting deep features of a plurality of images, and learn to classify the plurality of images into particular groups based on the similarity between the deep features of the plurality of images. Here, the similarity between deep features may be indicated by the distance between the vectors extracted as the deep features, where the similarity is high when the distance between the vectors is short and low when the distance between the vectors is long. In addition, a plurality of images whose vectors are within a preset range of each other may be classified into one group.
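The distance-based grouping criterion above can be illustrated with a small sketch: vectors within a preset range of an existing group member join that group, otherwise they start a new group. The single-link style of grouping is an assumption for illustration; the patent does not specify a particular grouping algorithm.

```python
import math

def distance(a, b):
    # Shorter distance between deep-feature vectors means higher similarity.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_by_range(vectors, preset_range):
    # Place each vector into the first group containing a member within
    # the preset range; otherwise open a new group (single-link style).
    groups = []
    for v in vectors:
        for g in groups:
            if any(distance(v, m) <= preset_range for m in g):
                g.append(v)
                break
        else:
            groups.append([v])
    return groups

# Two tight clusters of toy feature vectors:
vectors = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.05, 1.0)]
print(len(group_by_range(vectors, preset_range=0.2)))
```

With a preset range of 0.2 the four vectors fall into two groups; widening the range merges everything into one, which is why the choice of range is itself something the model learner must learn.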
In addition, the model learner 1340 may train the data classification model for classifying a plurality of images by using the learning data. Here, the data classification model may be a pre-constructed model. For example, the data classification model may be constructed in advance by receiving basic learning data (for example, sample images).
The data classification model may be constructed in consideration of the application field of the model, the purpose of the learning, or the computer performance of the electronic device. The data classification model may be, for example, a model based on a neural network. For example, a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), or a bidirectional recurrent deep neural network (BRDNN) may be used as the data classification model, but embodiments are not limited thereto.
According to an embodiment, when there are a plurality of pre-constructed data classification models, the model learner 1340 may determine a data classification model having a high correlation between the input learning data and the basic learning data as the data classification model to be trained. In this case, the basic learning data may be pre-classified according to data type, and the data classification models may be pre-constructed according to data type. For example, the basic learning data may be pre-classified based on various criteria, such as the region where the basic learning data was generated, the time at which the basic learning data was generated, the size of the basic learning data, the genre of the basic learning data, the generator of the basic learning data, and the type of object in the basic learning data.
In addition, the model learner 1340 may train the data classification model by using a training algorithm including, for example, error backpropagation or gradient descent.
In addition, the model learner 1340 may train the data classification model via, for example, supervised learning that uses learning data as an input value. In addition, the model learner 1340 may train the data classification model via unsupervised learning, in which a criterion for determining a situation is found by self-learning the types of data required for determining the situation without separate supervision. In addition, the model learner 1340 may train the data classification model via, for example, reinforcement learning, which uses feedback about whether a result of classifying images is correct.
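As a concrete illustration of gradient-descent training, the sketch below fits a tiny logistic-regression classifier on labeled 1-D data, with the loss gradients written out by hand. It is a generic supervised-learning example under assumed toy data, not the patent's model; an actual data classification model would be a DNN/CNN trained by error backpropagation over image tensors.

```python
import math

# Tiny supervised-learning sketch: logistic regression trained by
# gradient descent on 1-D inputs with binary labels (toy data).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0, 0, 1, 1]

w, b, lr = 0.0, 0.0, 0.5

def predict(x):
    # Sigmoid of the linear score; reads the current global w and b.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

for _ in range(2000):
    # Average gradient of the cross-entropy loss w.r.t. w and b.
    gw = sum((predict(x) - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum((predict(x) - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * gw
    b -= lr * gb

labels = [int(predict(x) > 0.5) for x in xs]
print(labels)
```

After training, the decision boundary settles near the midpoint of the two classes, so the model reproduces the training labels.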
In addition, after the data classification model is trained, the model learner 1340 may store the trained data classification model. At this time, the model learner 1340 may store the trained data classification model in the memory of the electronic device including the data classification unit 1400. The model learner 1340 may store the trained data classification model in the memory of the electronic device including the data classification unit 1400 described below. Alternatively, the model learner 1340 may store the trained data classification model in the memory of a server connected to the electronic device via a wired network or a wireless network.
Here, the memory storing the trained data classification model may also store, for example, instructions or data related to at least one other component of the electronic device. In addition, the memory may store software and/or programs. A program may include, for example, a kernel, middleware, an application programming interface (API), and/or application programs (or 'applications').
The model evaluator 1350 may input evaluation data to the data classification model, and when a recognition result output from the evaluation data does not satisfy a specific criterion, may cause the model learner 1340 to train the data classification model again. Here, the evaluation data may be data preset for evaluating the data classification model.
For example, when, among the classification results of the trained data classification model for the evaluation data, the number or ratio of pieces of evaluation data whose classification results are inaccurate exceeds a preset threshold, the model evaluator 1350 may determine that the recognition result does not satisfy the specific criterion. For example, when the specific criterion is 2% and the trained data classification model outputs incorrect recognition results for more than 20 pieces of evaluation data out of 1000 pieces of evaluation data, the model evaluator 1350 may determine that the trained data classification model is unsuitable.
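The 2%-threshold check described above can be expressed directly. A minimal sketch under the numbers given in the text (1000 evaluation samples, unsuitable beyond 20 errors); the function name and list encoding are assumptions.

```python
def is_model_suitable(results, expected, max_error_ratio=0.02):
    # Count evaluation samples whose recognition result is incorrect and
    # compare the error ratio against the specific criterion (e.g. 2%).
    errors = sum(1 for r, e in zip(results, expected) if r != e)
    return errors / len(expected) <= max_error_ratio

# 1000 evaluation samples with 21 incorrect recognition results:
expected = [0] * 1000
results = [1] * 21 + [0] * 979
print(is_model_suitable(results, expected))  # more than 20 errors -> unsuitable
```

With exactly 20 errors the ratio is 2.0% and the model still passes; the 21st error tips it past the criterion and triggers retraining by the model learner 1340.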
In addition, when there are a plurality of trained data classification models, the model evaluator 1350 may evaluate whether each of the trained data classification models satisfies the specific criterion, and determine a trained data classification model satisfying the specific criterion as the final data classification model. Here, when there are a plurality of data classification models satisfying the specific criterion, the model evaluator 1350 may determine one or a preset number of data classification models as the final data classification model(s), in descending order of evaluation score.
In addition, at least one of the data obtainer 1310, the preprocessor 1320, the learning data selector 1330, the model learner 1340, and the model evaluator 1350 in the data learning unit 1300 may be manufactured in the form of at least one hardware chip and included in an electronic device. For example, at least one of the data obtainer 1310, the preprocessor 1320, the learning data selector 1330, the model learner 1340, and the model evaluator 1350 may be manufactured in the form of a dedicated AI hardware chip, or may be manufactured as a part of an existing general-purpose processor (for example, a CPU or an application processor) or a graphics-dedicated processor (for example, a GPU), and may be included in any of the electronic devices described above.
In addition, the data obtainer 1310, the preprocessor 1320, the learning data selector 1330, the model learner 1340, and the model evaluator 1350 may be included in one electronic device or in different electronic devices. For example, some of the data obtainer 1310, the preprocessor 1320, the learning data selector 1330, the model learner 1340, and the model evaluator 1350 may be included in an electronic device, and the remainder may be included in a server.
In addition, at least one of the data obtainer 1310, the preprocessor 1320, the learning data selector 1330, the model learner 1340, and the model evaluator 1350 may be implemented as a software module. When at least one of the data obtainer 1310, the preprocessor 1320, the learning data selector 1330, the model learner 1340, and the model evaluator 1350 is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable recording medium. Alternatively, a part of the software module may be provided by the OS, and the remainder may be provided by a specific application.
Figure 14 is the block diagram for showing sample data taxon 1400 according to an example embodiment of the present disclosure.
Referring to Fig.1 4, data sorting unit 1400 according to the embodiment may include data obtainer (e.g., including processing electricity Road and/or program element) 1410, preprocessor (e.g., including processing circuit and/or program element) 1420, classification data choosing Select device (e.g., including processing circuit and/or program element) 1430, classification results provider (e.g., including processing circuit and/ Or program element) 1440 and model modification device (e.g., including processing circuit and/or program element) 1450.However, embodiment is not It is limited to this.According to embodiment, data sorting unit 1400 may include some components in components above.For example, data classification list Member 1400 can only include data obtainer 1410 and classification results provider 1440.According to another embodiment, data sorting unit 1400 may also include the component other than components above.
Data needed for data obtainer 1410 can get image classification, and preprocessor 1420 can be to the data of acquisition It is pre-processed so that the data obtained are used for image classification.The data of acquisition can be processed into preset by preprocessor 1420 Format makes the data obtained be used for image classification.
Data needed for classification data selector 1430 can select image classification from pretreated data.The data of selection can It is provided to classification results provider 1440.Classification data selector 1430 can select some or all pre- based on preset standard The data of processing are used for image classification.In addition, classification data selector 1430 can be learned according to via by model learning device 1340 It practises and preset standard selects data.
Classification results provider 1440 can by by the data application of selection in data classification model by image classification.Point Class result provider 1440 can provide classification results according to the classification purpose of data.Classification results provider 1440 can will by point The data that class data selector 1430 selects are used as the data application that will select of input value in data classification model.In addition, can Classification results are determined by data classification model.
For example, classification results provider 1440 can provide the result that multiple images are categorized into particular demographic.It is classified into The image of one group can be stored in same file folder.
In addition, classification results provider 1440 can estimate another image similar with image according to embodiment.For example, point Class result provider 1440 can estimate with the first image (for example, pre-stored image or newly one of image for inputting) The image of the similar depth characteristic of depth characteristic.
Model modification device 1450 can be updated based on the estimation to the classification results provided by classification results provider 1440 Data classification model.For example, model modification device 1450 can be determined according to the classification results provided by classification results provider 1440 Whether data classification model, which needs, is updated, and when needing to update data classification model, more using model learning device 1340 New data disaggregated model.Model learning device 1340 can be by using the image data re -training data classification model of user with more New data disaggregated model.
In addition, data obtainer 1410, preprocessor 1420 in data sorting unit 1400, classification data selector 1430, at least one of classification results provider 1440 and model modification device 1450 can be in the form of at least one hardware chips It is manufactured and can be included in electronic equipment.For example, data obtainer 1410, preprocessor 1420, classification data select At least one of device 1430, classification results provider 1440 and model modification device 1450 can be made with AI proprietary hardware chip Make, or can be manufactured such that existing general processor (for example, CPU or application processor) or image application specific processor (for example, GPU a part), and can be included in above-mentioned any electronic equipment.
In addition, the data obtainer 1410, the preprocessor 1420, the classification data selector 1430, the classification result provider 1440, and the model updater 1450 may be included in one electronic apparatus or distributed over different electronic apparatuses. For example, some of the data obtainer 1410, the preprocessor 1420, the classification data selector 1430, the classification result provider 1440, and the model updater 1450 may be included in an electronic apparatus, and the rest may be included in a server.
In addition, at least one of the data obtainer 1410, the preprocessor 1420, the classification data selector 1430, the classification result provider 1440, and the model updater 1450 may be implemented as a software module. When at least one of the data obtainer 1410, the preprocessor 1420, the classification data selector 1430, the classification result provider 1440, and the model updater 1450 is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer-readable recording medium. The software module may be provided by an operating system (OS) or by a certain application. Alternatively, a part of the software module may be provided by the OS and the remainder by a certain application.
FIG. 15 is a diagram illustrating an example in which the electronic apparatus 1000 and the server 2000, according to an embodiment, cooperate to learn and recognize data.
Referring to FIG. 15, the server 2000 may analyze user images to learn a standard for classifying images, and the electronic apparatus 1000 may classify a plurality of images based on the learning result of the server 2000. The server 2000 may include a data obtainer (e.g., including processing circuitry and/or program elements) 2310, a preprocessor (e.g., including processing circuitry and/or program elements) 2320, a learning data selector (e.g., including processing circuitry and/or program elements) 2330, a model learner (e.g., including processing circuitry and/or program elements) 2340, and a model evaluator (e.g., including processing circuitry and/or program elements) 2350. These elements are the same as or similar to the corresponding elements of the electronic apparatus, and thus detailed descriptions thereof are not repeated here.
Here, the model learner 2340 of the server 2000 may perform the functions of the model learner 1340 of FIG. 13. The model learner 2340 may learn a standard for extracting deep features of a plurality of images, and may learn to classify the plurality of images into particular groups based on similarities between the deep features of the plurality of images. The model learner 2340 may obtain data to be used for learning, and apply the obtained data to the data classification model to learn the standard for classifying the plurality of images.
In addition, the classification result provider 1440 of the electronic apparatus 1000 may classify the plurality of images by applying the data selected by the classification data selector 1430 to the data classification model generated by the server 2000. For example, the classification result provider 1440 may transmit the data selected by the classification data selector 1430 to the server 2000 and request the server 2000 to classify the plurality of images by applying the selected data to the data classification model. In addition, the classification result provider 1440 may provide a result of classifying the plurality of images into particular groups. Images classified into one group may be stored in the same folder.
Alternatively, the classification result provider 1440 of the electronic apparatus 1000 may receive, from the server 2000, the classification model generated by the server 2000, analyze images by using the received classification model, and classify the plurality of images. In this case, the classification result provider 1440 of the electronic apparatus 1000 may classify the plurality of images by applying the data selected by the classification data selector 1430 to the classification model received from the server 2000. When the electronic apparatus 1000 receives the classification model from the server 2000 and classifies the plurality of images itself, the user data (the plurality of user images) need not be transmitted to the server 2000, and thus user data security and personal information protection can be reinforced.
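The privacy-preserving split above can be sketched as: the server ships a trained classification model once, and the device classifies locally so images never leave it. In this toy version the "model" is a set of group centroids — an assumption for illustration only; the patent does not specify the model's internals.

```python
# Conceptual sketch: on-device classification with a server-trained model.
# Only the model travels from server to device; user images stay local.

def classify_on_device(image_features, model_centroids):
    """Assign each local image to the nearest group centroid from the model."""
    def nearest(feat):
        return min(
            model_centroids,
            key=lambda g: sum((x - c) ** 2 for x, c in zip(feat, model_centroids[g])),
        )
    return {name: nearest(feat) for name, feat in image_features.items()}

server_model = {"people": [1.0, 0.0], "scenery": [0.0, 1.0]}      # downloaded once
local_images = {"img1.jpg": [0.9, 0.2], "img2.jpg": [0.1, 0.8]}   # never uploaded
print(classify_on_device(local_images, server_model))
# → {'img1.jpg': 'people', 'img2.jpg': 'scenery'}
```

The trade-off named in the text follows directly: requesting server-side classification requires uploading the selected data, while downloading the model keeps user images on the device.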
FIG. 16 is a block diagram illustrating an example configuration of an electronic apparatus 300 according to another example embodiment of the present disclosure. The electronic apparatus 300 of FIG. 16 may be an example of the electronic apparatus 100 of FIG. 1.
Referring to FIG. 16, the electronic apparatus 300 according to an embodiment may include a controller (e.g., including processing circuitry) 330, a sensor unit (e.g., including sensing circuitry) 320, a communication unit (e.g., including communication circuitry) 340, an output unit (e.g., including output circuitry) 350, an input unit (e.g., including input circuitry) 360, an audio/video (A/V) input unit (e.g., including A/V input circuitry) 370, and a storage unit 380.
The controller 330 of FIG. 16 may correspond to the processor 120 of FIG. 11, the storage unit 380 of FIG. 16 may correspond to the memory 130 of FIG. 11, and the display 351 of FIG. 16 may correspond to the display 140 of FIG. 11. Accordingly, details of FIG. 16 that are the same as those of FIG. 11 are not repeated here.
The communication unit 340 may include various communication circuitry, including at least one component enabling communication between the electronic apparatus 300 and an external device (for example, a server). For example, the communication unit 340 may include a short-range wireless communication unit 341, a mobile communication unit 342, and a broadcast receiving unit 343.
The short-range wireless communication unit 341 may include various short-range wireless communication circuitry, such as, but not limited to, a Bluetooth communication unit, a Bluetooth Low Energy (BLE) communication unit, a near-field communication (NFC) unit, a wireless local area network (WLAN) (Wi-Fi) communication unit, a Zigbee communication unit, an Infrared Data Association (IrDA) communication unit, a Wi-Fi Direct (WFD) communication unit, an ultra-wideband (UWB) communication unit, and an Ant+ communication unit.
The mobile communication unit 342 may include various mobile communication circuitry, and transmits and receives wireless signals to and from at least one of a base station, an external terminal, and a server on a mobile communication network. Here, the wireless signals may include data in various formats transmitted and received according to a voice call signal, a video call signal, or a text/multimedia message.
The broadcast receiving unit 343 may include various broadcast receiving circuitry, and receives broadcast signals and/or broadcast-related information from an external source through a broadcast channel. The broadcast channel may include a satellite channel or a terrestrial broadcast channel. According to an embodiment, the electronic apparatus 300 may not include the broadcast receiving unit 343.
The communication unit 340 may receive at least one image from an external device. Alternatively, the communication unit 340 may request an external server to transmit a feature extraction model and a classification model. The communication unit 340 may transmit a result of classifying a plurality of images to the external server, and receive a feature extraction model and a classification model updated based on the result.
The output unit 350 may include various output circuitry and outputs an audio signal, a video signal, or a vibration signal, and may include a display 351, a sound output unit (e.g., including sound output circuitry) 352, and a vibration motor 353. Since the display 351 has been described above with reference to FIG. 11, details of the display 351 are not repeated here.
The sound output unit 352 may include various sound output circuitry, and outputs audio data received from the communication unit 340 or stored in the storage unit 380. In addition, the sound output unit 352 may output sound signals related to functions performed by the electronic apparatus 300. The sound output unit 352 may include, for example, but is not limited to, a speaker or a buzzer.
The vibration motor 353 may output a vibration signal. For example, the vibration motor 353 may output a vibration signal corresponding to the output of audio data or video data. In addition, the vibration motor 353 may output a vibration signal when a touch screen is touched.
The controller 330 may include various processing circuitry and controls the overall operation of the electronic apparatus 300. For example, the controller 330 may execute programs stored in the storage unit 380 to control the communication unit 340, the output unit 350, the user input unit 360, the sensing unit 320, and the A/V input unit 370.
The input unit 360 may include various input circuitry for inputting data used to control the electronic apparatus 300. Examples of the user input unit 360 include, but are not limited to, one or more of a keypad, a dome switch, a touch pad (capacitive, resistive film, infrared detection, surface acoustic wave conduction, integral tension measurement, or piezoelectric effect type), a jog wheel, and a jog switch.
The sensor unit 320 may include various sensing circuitry and/or sensors, including not only sensors for sensing biometric information of a user, but also sensors for detecting a state of the electronic apparatus 300 or a state around the electronic apparatus 300. In addition, the sensor unit 320 may transmit information sensed by the sensors to the controller 330.
The sensing unit 320 may include various sensors and/or sensing circuitry, such as, but not limited to, one or more of a geomagnetic sensor 321, an acceleration sensor 322, a temperature/humidity sensor 323, an infrared sensor 324, a grip sensor 325, a position sensor 326 (for example, a global positioning system (GPS)), an atmospheric pressure sensor 327, a proximity sensor 328, and an RGB (red, green, blue) sensor 329 (illuminance sensor). Because a person of ordinary skill in the art can intuitively infer the function of each sensor from its name, details of each sensor are not described here.
The A/V input unit 370 may include various A/V input circuitry and receives an audio signal or a video signal, and may include, for example, but is not limited to, one or more of a camera 371 and a microphone 372. In a video call mode or a shooting mode, the camera 371 may obtain image frames of a still image or a moving image via an image sensor. The images captured via the image sensor may be processed by the controller 330 or a separate image processor (not shown).
The image frames processed by the camera 371 may be stored in the storage unit 380 or transmitted to an external device through the communication unit 340. Depending on the embodiment of the electronic apparatus 300, at least two cameras 371 may be present.
The microphone 372 receives an external sound signal and processes the external sound signal into electrical sound data. For example, the microphone 372 may receive a sound signal from an external device or a speaker. The microphone 372 may use any of various noise removal algorithms to remove noise generated while receiving the external sound signal.
The storage unit 380 may store programs for processing and control of the controller 330, and may store input/output data (for example, applications, content, timeline information of an external device, and an address book).
The storage unit 380 may include at least one type of storage medium among a flash memory, a hard disk, a multimedia card micro memory, a card-type memory (for example, a secure digital (SD) card or an extreme digital (XD) card), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), a magnetic memory, a magnetic disk, and an optical disk. In addition, the electronic apparatus 300 may operate a web storage or cloud server on the Internet that performs the storage function of the storage unit 380.
The programs stored in the storage unit 380 may be classified into a plurality of modules based on their functions, and may be classified into, for example, a user interface (UI) module 381, a touch screen module 382, and a notification module 383.
The UI module 381 may provide a specialized UI or GUI that interworks with the electronic apparatus 300 for each application. The touch screen module 382 may detect a user's touch gesture on the touch screen and transmit information about the touch gesture to the controller 330.
The touch screen module 382 may recognize and analyze a touch code. The touch screen module 382 may be configured as separate hardware including a controller.
The notification module 383 may generate a signal for notifying the occurrence of an event in the electronic apparatus 300. Examples of events occurring in the electronic apparatus 300 include call signal reception, message reception, key signal input, and schedule notification. The notification module 383 may output a notification signal in a video signal format through the display 351, in an audio signal format through the sound output unit 352, or in a vibration signal format through the vibration motor 353.
In addition, the electronic apparatus 100 illustrated in FIG. 11 and the electronic apparatus 300 illustrated in FIG. 16 are only examples, and the components of the electronic apparatus 100 or the electronic apparatus 300 may be combined, added to, or omitted from the electronic apparatus 100 or the electronic apparatus 300. In other words, if necessary, at least two components may be combined into one component, or one component may be divided into at least two components. In addition, the functions performed by the components are only examples, and their detailed operations do not limit the scope of the present disclosure.
The processes described above may be implemented as a computer program executable by various computers and recorded on a non-transitory computer-readable medium. The non-transitory computer-readable recording medium may include at least one of program commands, data files, and data structures. The program commands recorded on the non-transitory computer-readable recording medium may be specially designed, or may be known to those of ordinary skill in the computer software field. Examples of the non-transitory computer-readable recording medium include read-only memory (ROM), random access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. Examples of computer commands include machine code produced by a compiler and high-level language code executable by a computer using an interpreter.
In addition, embodiments may be provided by being included in a computer program product. The computer program product is a product that can be traded between a seller and a buyer.
The computer program product may include a software program and a computer-readable recording medium on which the software program is recorded. For example, the computer program product may include a product in the form of a software program (for example, a downloadable application) electronically distributed through an electronic market (for example, Google Play or the App Store) or by the manufacturer of the electronic apparatus. For electronic distribution, at least a part of the software program may be stored in a storage medium or may be temporarily generated. In this case, the storage medium may be a storage medium of a server of the manufacturer, a server of the electronic market, or a relay server that temporarily stores the software program.
In a system including a server and an electronic apparatus, the computer program product may include a storage medium of the server or a storage medium of the electronic apparatus. Alternatively, when there is a third device (for example, a smartphone) connected to the server or the electronic apparatus, the computer program product may include a storage medium of the third device. Alternatively, the computer program product may include the software program itself, transmitted from the server to the electronic apparatus or to the third device, or transmitted from the third device to the electronic apparatus.
In this case, one of the server, the electronic apparatus, and the third device may execute the computer program product to perform the method according to the embodiments. Alternatively, at least two of the server, the electronic apparatus, and the third device may execute the computer program product to perform the method according to the embodiments in a distributed manner.
For example, the server (for example, a cloud server or an AI server) may execute the computer program product stored in the server to control the electronic apparatus connected to the server to perform the method according to the embodiments.
As another example, the third device may execute the computer program product to control the electronic apparatus connected to the third device to perform the method according to the embodiments. When the third device executes the computer program product, the third device may download the computer program product from the server and execute the downloaded computer program product. Alternatively, the third device may execute a pre-loaded computer program product to perform the method according to the embodiments.
The electronic apparatus according to the embodiments may classify and search a plurality of images by using a feature extraction model and a classification model trained based on user data, that is, by using a classification standard optimized for the user.
During an image search using the electronic apparatus according to the embodiments, the user can find a desired image without having to remember all keywords or input a specific keyword, and thus user convenience can be improved.
The electronic apparatus according to the embodiments may automatically assign a keyword suited to a feature of an image by cooperating with a language model, without the user having to assign the keyword.
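The automatic keyword assignment mentioned above can be loosely pictured as matching a group's mean deep feature against candidate tag embeddings. Everything in this sketch — the tag names, the embedding values, and the use of a nearest-embedding lookup in place of a full language model — is an invented simplification, not the patent's method.

```python
# Loose illustration: pick the tag whose (hypothetical) embedding is closest
# to the mean deep feature of an image group, as an automatic group keyword.

def mean_feature(features):
    """Component-wise mean of a list of deep-feature vectors."""
    n = len(features)
    return [sum(col) / n for col in zip(*features)]

def auto_keyword(group_features, tag_embeddings):
    """Return the tag closest (squared Euclidean) to the group's mean feature."""
    center = mean_feature(group_features)
    return min(
        tag_embeddings,
        key=lambda t: sum((x - y) ** 2 for x, y in zip(center, tag_embeddings[t])),
    )

group = [[0.9, 0.1], [0.8, 0.2]]                      # deep features of one group
tags = {"beach": [0.85, 0.1], "party": [0.1, 0.9]}    # invented tag embeddings
print(auto_keyword(group, tags))  # → beach
```

In practice the tag embeddings would come from the cooperating language model, and the match would happen in the same representation space as the deep features.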
While various example embodiments have been described with reference to the drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made to the example embodiments without departing from the spirit and scope of the present disclosure as set forth in the claims.

Claims (15)

1. An electronic apparatus comprising:
a display;
a memory configured to store at least one instruction; and
a processor configured to execute the at least one instruction stored in the memory,
wherein the processor is configured to execute the at least one instruction to cause the electronic apparatus to: obtain a plurality of images, extract deep features of the plurality of images by using a feature extraction model, classify the plurality of images into particular groups by using the extracted deep features and a classification model, display a result of the classification on the display, determine whether the feature extraction model and/or the classification model needs to be updated by using the result of the classification, and train and update at least one of the feature extraction model and the classification model based on a result of the determination.
2. The electronic apparatus of claim 1, wherein the processor is configured to execute the at least one instruction to cause the electronic apparatus to store the plurality of images and the deep features.
3. The electronic apparatus of claim 2, wherein the processor is configured to execute the at least one instruction to cause the electronic apparatus to store the deep features in an exchangeable image file format (EXIF) of the plurality of images.
4. The electronic apparatus of claim 1, wherein the feature extraction model comprises a first neural network, and
the processor is configured to execute the at least one instruction to cause the electronic apparatus to extract a vector from at least one layer of the first neural network by inputting each of the plurality of images to the first neural network,
wherein the deep features of the plurality of images include the extracted vectors.
5. The electronic apparatus of claim 4, wherein the classification model comprises a second neural network, the second neural network being configured to classify the plurality of images into the particular groups based on similarities between the deep features of the plurality of images.
6. The electronic apparatus of claim 5, wherein the similarity between the deep features of the plurality of images is determined by a degree of similarity and a difference between the vectors included in the deep features, wherein the degree of similarity decreases as the difference between the vectors increases, the degree of similarity increases as the difference between the vectors decreases, and the difference between the vectors corresponding to images classified into one group is within a preset range.
7. The electronic apparatus of claim 1, wherein the processor is configured to execute the at least one instruction to cause the electronic apparatus to: obtain a first image, extract a deep feature of the first image, extract at least one image from among the classified plurality of images based on the deep feature of the first image, and display the extracted at least one image on the display.
8. The electronic apparatus of claim 1, wherein the processor is configured to execute the at least one instruction to cause the electronic apparatus to train and update at least one of the feature extraction model and the classification model periodically or based on a received request.
9. The electronic apparatus of claim 1, wherein the processor is configured to execute the at least one instruction to cause the electronic apparatus to, when the classification model is updated, reclassify the classified plurality of images by using the extracted deep features and the updated classification model.
10. The electronic apparatus of claim 1, wherein the processor is configured to execute the at least one instruction to cause the electronic apparatus to, when the feature extraction model is updated, re-extract deep features of the classified plurality of images by using the updated feature extraction model, and reclassify the classified plurality of images by using the re-extracted deep features and the classification model.
11. The electronic apparatus of claim 1, wherein, when at least one of the feature extraction model and the classification model is updated, the processor is configured to execute the at least one instruction to cause the electronic apparatus to reclassify a plurality of images included in a first group, among the particular groups into which the plurality of images are classified, into at least two groups based on the deep features of the plurality of images included in the first group.
12. The electronic apparatus of claim 1, wherein, when at least one of the feature extraction model and the classification model is updated, the processor is configured to execute the at least one instruction to cause the electronic apparatus to reclassify, into one group, a plurality of images included in a first group and a second group among the particular groups into which the plurality of images are classified.
13. The electronic apparatus of claim 1, wherein the processor is configured to execute the at least one instruction to cause the electronic apparatus to generate a group name for each of the particular groups.
14. A method of operating an electronic apparatus, the method comprising:
obtaining a plurality of images;
extracting deep features of the plurality of images by using a feature extraction model;
classifying the plurality of images into particular groups by using the extracted deep features and a classification model;
displaying a result of the classification;
determining whether the feature extraction model and/or the classification model needs to be updated by using the result of the classification; and
training and updating at least one of the feature extraction model and the classification model based on a result of the determination.
15. A computer program product comprising a computer-readable recording medium having recorded thereon instructions which, when executed by a processor, cause an electronic apparatus to perform operations comprising:
obtaining a plurality of images;
extracting deep features of the plurality of images by using a feature extraction model;
classifying the plurality of images into particular groups by using the extracted deep features and a classification model;
displaying a result of the classification;
determining whether the feature extraction model and/or the classification model needs to be updated by using the result of the classification; and
training and updating at least one of the feature extraction model and the classification model based on a result of the determination.
CN201880005869.1A 2017-01-03 2018-01-03 Electronic device and method of operating the same Active CN110168530B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR10-2017-0000789 2017-01-03
KR20170000789 2017-01-03
KR1020170136612A KR102428920B1 (en) 2017-01-03 2017-10-20 Image display device and operating method for the same
KR10-2017-0136612 2017-10-20
PCT/KR2018/000069 WO2018128362A1 (en) 2017-01-03 2018-01-03 Electronic apparatus and method of operating the same

Publications (2)

Publication Number Publication Date
CN110168530A true CN110168530A (en) 2019-08-23
CN110168530B CN110168530B (en) 2024-01-26

Family

ID=62917879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880005869.1A Active CN110168530B (en) 2017-01-03 2018-01-03 Electronic device and method of operating the same

Country Status (3)

Country Link
EP (1) EP3545436A4 (en)
KR (1) KR102428920B1 (en)
CN (1) CN110168530B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113254742A (en) * 2021-07-14 2021-08-13 深圳市赛野展览展示有限公司 Display device based on 5G deep learning artificial intelligence
TWI754515B (en) * 2020-10-27 2022-02-01 大陸商深圳市商湯科技有限公司 Image detection and related model training method, equipment and computer readable storage medium

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102301720B1 (en) 2018-07-10 2021-09-10 주식회사 엘지에너지솔루션 Electrochemical capacitor and manufacturing method thereof
KR102135477B1 (en) * 2018-08-31 2020-07-17 엔에이치엔 주식회사 Method and system for image automatic classification
JP7225876B2 (en) 2019-02-08 2023-02-21 富士通株式会社 Information processing device, arithmetic processing device, and control method for information processing device
KR102259045B1 (en) * 2019-03-20 2021-05-31 박주복 Method and apparatus of generating virtual reality image
WO2020256339A1 (en) * 2019-06-18 2020-12-24 삼성전자주식회사 Electronic device and control method of same
KR20210048896A (en) * 2019-10-24 2021-05-04 엘지전자 주식회사 Detection of inappropriate object in the use of eletronic device
KR102144975B1 (en) * 2019-11-08 2020-08-14 주식회사 알체라 Machine learning system and method for operating machine learning system
CN112906724A (en) * 2019-11-19 2021-06-04 华为技术有限公司 Image processing device, method, medium and system
KR102293791B1 (en) * 2019-11-28 2021-08-25 광주과학기술원 Electronic device, method, and computer readable medium for simulation of semiconductor device
KR102476334B1 (en) * 2020-04-22 2022-12-09 인하대학교 산학협력단 Diary generator using deep learning
KR20210155283A (en) * 2020-06-15 2021-12-22 삼성전자주식회사 Electronic device and operating method for the same
KR102434483B1 (en) * 2020-12-17 2022-08-19 주식회사 알체라 Method for managing biometrics system and apparatus for performing the same
KR102479718B1 (en) * 2021-01-14 2022-12-21 대전대학교 산학협력단 Ai based image recognition and classification method for ar device, thereof system
KR20220107519A (en) * 2021-01-25 2022-08-02 주식회사 제네시스랩 Methods, Systems and Computer-Readable Medium for Learning Machine-Learned-Model Evaluating Plural Competency
KR102422962B1 (en) * 2021-07-26 2022-07-20 주식회사 크라우드웍스 Automatic image classification and processing method based on continuous processing structure of multiple artificial intelligence model, and computer program stored in a computer-readable recording medium to execute the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335712A (en) * 2015-10-26 2016-02-17 小米科技有限责任公司 Image recognition method, device and terminal
WO2016077834A1 (en) * 2014-11-14 2016-05-19 Zorroa Corporation Systems and methods of building and using an image catalog
CN106104577A (en) * 2014-03-07 2016-11-09 高通股份有限公司 Photo management

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100738080B1 (en) * 2005-11-08 2007-07-12 삼성전자주식회사 Method of and apparatus for face recognition using gender information
US20110169982A1 (en) * 2010-01-13 2011-07-14 Canon Kabushiki Kaisha Image management apparatus, method of controlling the same, and storage medium storing program therefor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106104577A (en) * 2014-03-07 2016-11-09 高通股份有限公司 Photo management
WO2016077834A1 (en) * 2014-11-14 2016-05-19 Zorroa Corporation Systems and methods of building and using an image catalog
CN105335712A (en) * 2015-10-26 2016-02-17 小米科技有限责任公司 Image recognition method, device and terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI754515B (en) * 2020-10-27 2022-02-01 大陸商深圳市商湯科技有限公司 Image detection and related model training method, equipment and computer readable storage medium
CN113254742A (en) * 2021-07-14 2021-08-13 深圳市赛野展览展示有限公司 Display device based on 5G deep learning artificial intelligence
CN113254742B (en) * 2021-07-14 2021-11-30 深圳市赛野展览展示有限公司 Display device based on 5G deep learning artificial intelligence

Also Published As

Publication number Publication date
EP3545436A1 (en) 2019-10-02
KR102428920B1 (en) 2022-08-04
CN110168530B (en) 2024-01-26
EP3545436A4 (en) 2020-05-06
KR20180080098A (en) 2018-07-11

Similar Documents

Publication Publication Date Title
CN110168530A (en) Electronic equipment and the method for operating the electronic equipment
US10970605B2 (en) Electronic apparatus and method of operating the same
CN111652678B (en) Method, device, terminal, server and readable storage medium for displaying article information
WO2021180062A1 (en) Intention identification method and electronic device
US11783191B2 (en) Method and electronic device for providing text-related image
CN110168603A (en) For the method and its equipment by equipment calibration image
CN106462598A (en) Information processing device, information processing method, and program
CN111339246A (en) Query statement template generation method, device, equipment and medium
KR20180055708A (en) Device and method for image processing
CN111567056B (en) Video playing device and control method thereof
US20210264106A1 (en) Cross Data Set Knowledge Distillation for Training Machine Learning Models
CN110100253A (en) Electronic equipment and its operating method
KR102628042B1 (en) Device and method for recommeding contact information
JP7277611B2 (en) Mapping visual tags to sound tags using text similarity
CN114564666B (en) Encyclopedia information display method, device, equipment and medium
CN111814475A (en) User portrait construction method and device, storage medium and electronic equipment
US20190012347A1 (en) Information processing device, method of processing information, and method of providing information
KR20180072534A (en) Electronic device and method for providing image associated with text
CN105849758A (en) Multi-modal content consumption model
US20200051559A1 (en) Electronic device and method for providing one or more items in response to user speech
KR20200085143A (en) Conversational control system and method for registering external apparatus
CN108509567B (en) Method and device for building digital culture content library
CN115131604A (en) Multi-label image classification method and device, electronic equipment and storage medium
CN113935332A (en) Book grading method and book grading equipment
US11615327B2 (en) Artificial intelligence device for providing search service and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant