CN107239203A - Image management method and device - Google Patents
Image management method and device
- Publication number
- CN107239203A (application number CN201611007300.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- user
- interested region
- region
- equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845 — GUI interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F16/583 — Retrieval of still image data characterised by using metadata automatically derived from the content
- G06F16/7837 — Retrieval of video data using metadata automatically derived from the content, using objects detected or recognised in the video content
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/04883 — GUI interaction using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F3/16 — Sound input; sound output
- G06T7/11 — Region-based segmentation
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- H04L67/61 — Scheduling or organising the servicing of application requests taking into account QoS or priority requirements
- H04L69/04 — Protocols for data compression, e.g. ROHC
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Library & Information Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Computer Security & Cryptography (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
This application discloses an image management method and device. The method includes: detecting a user operation directed at an image; and managing the image based on the operation and a user-interest region in the image. By managing images according to the user's interests, embodiments of the present invention can accurately capture user needs and improve image-management efficiency.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image management method and device.
Background technology
With improvements in smart-device hardware and falling production costs, camera performance and storage capacity have increased significantly. As a result, smart devices store large numbers of images, and users' demand for browsing, retrieving, sharing and managing these images has grown steadily stronger.
In the prior art, image browsing is mainly based on the time dimension: in the browsing interface, as the user switches images, all images are presented to the user in chronological order.
However, time-based image browsing ignores the user's points of interest.
Summary of the invention
The present application proposes an image management method and device. The technical scheme of the application is as follows:
According to one aspect of embodiments of the present invention, an image management method includes:
detecting a user operation directed at an image;
managing the image based on the operation and a user-interest region in the image.
According to another aspect of embodiments of the present invention, an image management device includes:
an operation detection module, configured to detect a user operation directed at an image;
a management module, configured to manage the image based on the operation and a user-interest region in the image.
With embodiments of the present invention, a user operation on an image is detected first, and the image is then managed based on the operation and the user-interest region in the image. It can be seen that, by managing images according to the user's interests, embodiments of the present invention can accurately capture user needs and improve image-management efficiency.
Brief description of the drawings
Fig. 1 is a flow chart of an image management method according to an embodiment of the present invention.
Fig. 2A is a flow chart of obtaining an image attribute list according to an embodiment of the present invention.
Fig. 2B is a schematic diagram of the region list of an image according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of determining a user-interest region by manual focus according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of determining a user-interest region based on a viewpoint heat map and/or a saliency heat map according to an embodiment of the present invention.
Fig. 5A-5D are exemplary schematic diagrams of determining a user-interest region based on a saliency view according to an embodiment of the present invention.
Fig. 6A is a schematic diagram of object detection carrying class labels according to an embodiment of the present invention.
Fig. 6B is a schematic diagram of generating class labels with an object classifier according to an embodiment of the present invention.
Fig. 6C is a schematic diagram of combining heat-map detection and image classification according to an embodiment of the present invention.
Fig. 7 is a flow chart of fast browsing during image browsing according to an embodiment of the present invention.
Fig. 8 is a flow chart of building a personalized tree structure according to an embodiment of the present invention.
Fig. 9 is a flow chart of personalized category classification according to an embodiment of the present invention.
Fig. 10 is a flow chart of selecting among different transmission modes according to an embodiment of the present invention.
Fig. 11 is a flow chart of a user actively initiating image sharing according to an embodiment of the present invention.
Fig. 12A-12B are flow charts of a user sharing images while using social software according to an embodiment of the present invention.
Fig. 13A-13G are schematic diagrams of fast browsing in the image-browsing interface according to an embodiment of the present invention.
Fig. 14A-14C are schematic diagrams of fast browsing across multiple images according to an embodiment of the present invention.
Fig. 15A-15C are schematic diagrams of fast browsing in video according to an embodiment of the present invention.
Fig. 16 is a schematic diagram of fast browsing in the camera preview interface according to an embodiment of the present invention.
Fig. 17 is a first exemplary structural diagram of a personalized tree structure according to an embodiment of the present invention.
Fig. 18 is a second exemplary structural diagram of a personalized tree structure according to an embodiment of the present invention.
Fig. 19 is a schematic diagram of fast browsing of a tree structure on a mobile terminal according to an embodiment of the present invention.
Fig. 20 is a flow chart of fast browsing of a tree structure on a small-screen device according to an embodiment of the present invention.
Fig. 21A-21B are schematic diagrams of fast browsing of a tree structure on a small-screen device according to an embodiment of the present invention.
Fig. 22 is a schematic diagram of displaying an image on a small-screen device according to an embodiment of the present invention.
Fig. 23 is a schematic diagram of transmission modes under different transmission volumes according to an embodiment of the present invention.
Fig. 24 is a schematic diagram of transmission modes under different network environments according to an embodiment of the present invention.
Fig. 25 is a first schematic diagram of sharing images from the thumbnail interface according to an embodiment of the present invention.
Fig. 26A-26C are second schematic diagrams of sharing images from the thumbnail interface according to an embodiment of the present invention.
Fig. 27 is a first schematic diagram of sharing from the chat interface according to an embodiment of the present invention.
Fig. 28 is a second schematic diagram of sharing from the chat interface according to an embodiment of the present invention.
Fig. 29 is a schematic diagram of image summarization from image to text according to an embodiment of the present invention.
Fig. 30 is a schematic diagram of image summarization from text to image according to an embodiment of the present invention.
Fig. 31 is a schematic diagram of content-based image conversion according to an embodiment of the present invention.
Fig. 32 is a schematic diagram of content-based intelligent deletion according to an embodiment of the present invention.
Fig. 33 is a structural diagram of an image management device according to an embodiment of the present invention.
Detailed description of the embodiments
To make the purpose, technical means and advantages of the application clearer, the application is further described below in conjunction with the accompanying drawings.
Embodiments of the present invention propose a content-based image management method, mainly including management operations on images based on user-interest regions: fast browsing, retrieval, adaptive transmission, personalized archive organization, quick sharing, deletion, and so on.
Embodiments of the present invention may be applied in the album management application of a smart device, in a cloud album management application, and so on.
Fig. 1 is a flow chart of an image management method according to an embodiment of the present invention.
As shown in Fig. 1, the method includes:
Step 101: detect a user operation directed at an image.
Step 102: manage the image based on the operation and a user-interest region (Region of Interest, ROI) in the image.
A user-interest region may be a region of the image with a specific meaning.
In one embodiment, the user-interest region in step 102 may be determined in at least one of the following ways:
Way (1): detect the manual focus position during image capture, and determine the image region corresponding to the manual focus position as the user-interest region.
During shooting, the region the user focuses on manually is very likely a region the user is interested in, so the image region corresponding to the manual focus position can be determined as a user-interest region.
Way (2): detect the autofocus position during image capture, and determine the image region corresponding to the autofocus position as the user-interest region.
During shooting, the region the camera focuses on automatically may also be a region the user is interested in, so the image region corresponding to the autofocus position can be determined as a user-interest region.
Way (3): detect an object region in the image, and determine the object region as the user-interest region.
Here, an object region may contain a person, an animal, a plant, a vehicle, a scenic spot, a building, etc. Compared with the other pixel regions of the image, an object region is more likely to interest the user, so it can be determined as a user-interest region.
Way (4): detect a viewpoint heat-map region in the image, and determine it as the user-interest region.
Here, a viewpoint heat-map region is a region users often fixate on while browsing an image. Such a region is comparatively likely to interest the user, so it can be determined as a user-interest region.
Way (5): detect a saliency heat-map region in the image, and determine it as the user-interest region.
Here, a saliency heat-map region differs visually from the other regions in an obvious way, which makes it prone to attract an observer's interest, so it can be determined as a user-interest region.
In one embodiment, a set of user-interest regions may be determined in multiple ways, based on manual focus, autofocus, viewpoint heat maps, object detection, saliency heat-map detection, and so on. Then, according to predetermined ranking factors, the user-interest regions in the set are ranked, and one or more final user-interest regions are determined from the ranking result. Specifically, the predetermined ranking factors include: source priority; position priority; class-label priority; classification-confidence priority; browse-frequency priority; and so on.
In one embodiment, when images are subsequently displayed to the user, the ranking of the user-interest regions an image contains may influence the priority of the corresponding image. For example, an image containing a highly ranked region of interest may be given higher priority and thus displayed to the user first.
The above exemplarily describes specific ways of determining the user-interest regions in an image. Those skilled in the art will appreciate that this description is only exemplary and is not intended to limit the protection scope of embodiments of the present invention.
In one embodiment, the method also includes: generating class labels for the user-interest regions. A class label indicates the category a user-interest region belongs to. Preferably, the class label may be generated from the object-detection result when detecting object regions in the image. Alternatively, a user-interest region may be input to an object classifier, and the class label generated from the classifier's output.
In one embodiment, after the user-interest regions are determined, the method also includes:
generating a region list for the image. The region list contains a region field for each user-interest region, and the region field contains the class label of that region. There may be one or more user-interest regions in an image; correspondingly, there may be one or more region fields in the region list. Preferably, a region field may also include: the source (e.g., which image the region comes from); the position (e.g., the coordinates of the region in the image); the classification confidence; the browse frequency; and so on.
The above exemplarily describes the information contained in a region field. Those skilled in the art will appreciate that this description is only exemplary and is not intended to limit the protection scope of embodiments of the present invention.
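The region list described above can be sketched as a small data structure. This is a minimal illustration only; the field names (`source`, `bbox`, `browse_count`, etc.) are assumptions chosen to mirror the attributes the patent lists, not names from the patent itself.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class RegionField:
    """One entry in an image's region list (field names are illustrative)."""
    source: str                      # which detector or image the region came from
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in image coordinates
    label: Optional[str] = None      # class label, absent if no class was detected
    confidence: float = 0.0          # confidence that the region belongs to `label`
    browse_count: int = 0            # how often the user has browsed this region

@dataclass
class RegionList:
    image_id: str
    scene_label: Optional[str] = None           # full-image classification result
    regions: List[RegionField] = field(default_factory=list)

# The Fig. 2B example: one person region and one pet region in the same image.
fig2b = RegionList(
    image_id="IMG_0001",
    scene_label="indoor",
    regions=[
        RegionField("object_detection", (40, 30, 120, 200), "person", 0.97, browse_count=5),
        RegionField("object_detection", (180, 150, 90, 80), "pet", 0.91, browse_count=2),
    ],
)
print([r.label for r in fig2b.regions])  # ['person', 'pet']
```

A person ID field would be added alongside `label` for regions classified as people, as Fig. 2B suggests.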
Fig. 2A is a flow chart of obtaining an image attribute list according to an embodiment of the present invention.
When building the image attribute list, the full-image attribute information and the attribute information of each region of interest must both be considered. The full-image attribute information may include the classification results of the full image, such as the scene category. As shown in Fig. 2A, the image is input first and classified as a whole, yielding the classification result. In addition, the regions of interest in the image must be detected; this step mainly extracts the regions of interest from the image. Through these two steps, full-image classification and region-of-interest detection, the image attribute list can be built. The image attribute list contains the classification results of the full image and the list of regions of interest (hereinafter, the region list).
Fig. 2B is a schematic diagram of the region list of an image according to an embodiment of the present invention.
As shown in Fig. 2B, the image contains two user-interest regions: a person region and a pet region. Accordingly, the region list of the image contains two region fields, one for each user-interest region. Each region field contains the image source of the user-interest region, the region's position in the image, the class of the region (including the person's ID if the region contains a person), the confidence that the region belongs to that class, the browse frequency, and so on.
The process of determining a user-interest region by manual focus is described in detail below.
Fig. 3 is a schematic diagram of determining a user-interest region by manual focus according to an embodiment of the present invention.
As shown in Fig. 3, when the device is in photo or video mode, it detects whether the user performs a manual focus action. If a manual focus action is detected, the manual focus position is recorded, a particular region corresponding to the manual focus position is cropped from the image, and that region is determined as the user-interest region.
Strategies for cropping the particular region from the image include:
(1) Crop according to predetermined parameters. These parameters may include the aspect ratio, the proportion of the total image area, a fixed side length, etc.
(2) Crop automatically according to the visual information of the image. For example, first segment the image by color, then crop the segmented region whose color approximates the color at the focus point.
(3) First perform object detection in the image, determine which object region the user's focus position falls in, and then crop that object region as the user-interest region.
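Strategies (1) and (3) above can be sketched in a few lines. This is a toy illustration under assumed conventions (boxes as `(x, y, w, h)` tuples, a fixed default crop size); a real implementation would operate on actual pixel buffers.

```python
def crop_around_focus(img_w, img_h, fx, fy, box_w=200, box_h=150):
    """Strategy (1): cut a fixed-size region centred on the manual-focus
    point (fx, fy), clamped so the box stays inside the image."""
    x = min(max(fx - box_w // 2, 0), img_w - box_w)
    y = min(max(fy - box_h // 2, 0), img_h - box_h)
    return (x, y, box_w, box_h)

def crop_to_object(focus, object_boxes):
    """Strategy (3): if the focus point falls inside a detected object box,
    use that whole object box as the user-interest region."""
    fx, fy = focus
    for (x, y, w, h) in object_boxes:
        if x <= fx < x + w and y <= fy < y + h:
            return (x, y, w, h)
    return None  # focus did not land on any detected object

print(crop_around_focus(640, 480, 20, 20))             # (0, 0, 200, 150): clamped to the corner
print(crop_to_object((100, 100), [(80, 90, 60, 40)]))  # (80, 90, 60, 40): focus hits the box
```

Strategy (2), color-based segmentation around the focus point, would replace the fixed box with a segmentation mask.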
The process of determining a user-interest region based on a viewpoint heat map or a saliency heat map is described in detail below.
Fig. 4 is a schematic diagram of determining a user-interest region based on a viewpoint heat map and/or a saliency heat map according to an embodiment of the present invention.
As shown in Fig. 4, the image is input first, and a viewpoint heat map and/or saliency heat map is generated for it. Next, the viewpoint heat map and/or saliency heat map is searched for points exceeding a predetermined threshold. If such a point exists, it is taken as the seed of a point set; hot points that are adjacent to the set and whose energy exceeds the threshold are added to the set, until no above-threshold hot point remains near the set, and the energy of these hot points is set to zero. The above procedure is repeated until no point above the threshold remains in the viewpoint heat map and/or saliency heat map. Each point set constitutes a user-interest region.
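The point-set growing procedure just described is essentially a thresholded flood fill. A minimal sketch, assuming a heat map given as a 2-D list of energies and 4-connected adjacency (the patent does not fix these details):

```python
def extract_regions(heat, threshold):
    """Grow point sets on a heat map: seed at any above-threshold point,
    absorb adjacent above-threshold points, zero each consumed point, and
    repeat until no point above the threshold remains."""
    h, w = len(heat), len(heat[0])
    regions = []
    for sy in range(h):
        for sx in range(w):
            if heat[sy][sx] <= threshold:
                continue
            # Seed a new point set and grow it over 4-connected neighbours.
            stack, point_set = [(sy, sx)], []
            heat[sy][sx] = 0  # zero the seed so it is consumed exactly once
            while stack:
                y, x = stack.pop()
                point_set.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and heat[ny][nx] > threshold:
                        heat[ny][nx] = 0
                        stack.append((ny, nx))
            regions.append(point_set)
    return regions

heat = [[0, 9, 9, 0],
        [0, 9, 0, 0],
        [0, 0, 0, 8]]
print(len(extract_regions(heat, 5)))  # 2 separate user-interest regions
```

This mirrors the Fig. 5 walkthrough: the bright cluster around one seed becomes one region, and the isolated bright point becomes another.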
Fig. 5A-5D are exemplary schematic diagrams of determining user-interest regions based on a saliency view according to an embodiment of the present invention.
Fig. 5A shows the input image. Fig. 5B is the saliency heat map corresponding to the input image; in Fig. 5B, the brighter a point, the higher its energy, and the darker a point, the lower its energy. When determining the user-interest regions, point A in Fig. 5B is chosen as the first seed. Starting from point A, all the bright points around it are added to its point set, and these points are then zeroed out, as shown in Fig. 5C. Similarly, by repeating the above procedure, a user-interest region is extracted from point B in Fig. 5B. The final user-interest region result is shown in Fig. 5D.
The process of generating class labels for user-interest regions is described in detail below.
Fig. 6A is a schematic diagram of object detection carrying class labels according to an embodiment of the present invention. Fig. 6A illustrates the flow of generating a region list with object class labels from object detection.
As shown in Fig. 6A, the image is input first and object detection is performed on it. The detected objects are set as user-interest regions, and class labels are generated for the user-interest regions based on the category results of the object detection.
Fig. 6B is a schematic diagram of generating class labels with an object classifier according to an embodiment of the present invention.
In Fig. 6B, a user-interest region is input to the object classifier. When the object classifier detects the class of the user-interest region, a class label is generated for the region from the category result, and a region list containing the class label is generated; when the object classifier does not detect the class of the user-interest region, a region list without a class label is generated.
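The two branches of Fig. 6B (label present vs. label absent) reduce to a simple conditional. A hedged sketch with a stand-in classifier; the confidence threshold and the dict shape are assumptions, and a real classifier would be a trained model rather than the toy function below.

```python
def label_region(region_crop, classifier, min_confidence=0.5):
    """Run the object classifier on a cropped user-interest region; attach a
    class label only when a class is detected confidently enough, otherwise
    emit a region-list entry without a class label."""
    label, confidence = classifier(region_crop)
    if label is not None and confidence >= min_confidence:
        return {"label": label, "confidence": confidence}
    return {"label": None, "confidence": 0.0}

# Stand-in classifier for illustration only.
def toy_classifier(crop):
    return ("cat", 0.93) if crop == "cat_pixels" else (None, 0.0)

print(label_region("cat_pixels", toy_classifier))  # {'label': 'cat', 'confidence': 0.93}
print(label_region("noise", toy_classifier))       # {'label': None, 'confidence': 0.0}
```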
In some algorithms, heat-map detection (including viewpoint heat maps and/or saliency heat maps) can be combined with image classification. Fig. 6C is a schematic diagram of combining heat-map detection and image classification according to an embodiment of the present invention.
As shown in Fig. 6A-6C, the input image passes through a shared convolutional-neural-network layer and then through two branches: a convolutional-neural-network object-classification branch for full-image classification and a convolutional-neural-network detection branch for saliency detection, simultaneously obtaining the full-image classification result and the salient-region detection result. The detected salient regions are then fed into the object-classification convolutional neural network for object classification. Finally, the classification results are fused to obtain the final classification result of the image, and classified user-interest regions are generated in the process.
After the classified user-interest regions are produced, they can be ranked. The ranking criteria may consider the source of the region, the confidence that the region belongs to a certain class, the browse frequency of the region, and so on. For example, the user-interest regions may be ranked from high to low by source, in the order: manual focus, viewpoint heat map, object detection, saliency detection. Finally, based on the ranking result, the one or more user-interest regions finally chosen can be determined.
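A toy illustration of this ranking step. The source order follows the example above; breaking ties by confidence and then browse frequency is an assumption, as are the field names.

```python
# Source priority from the example: manual focus ranks above viewpoint heat
# map, which ranks above object detection, which ranks above saliency.
SOURCE_RANK = {"manual_focus": 0, "viewpoint_heatmap": 1,
               "object_detection": 2, "saliency": 3}

def rank_regions(regions, top_k=1):
    """Order candidate user-interest regions by source priority, then class
    confidence, then browse frequency, and keep the top_k of them."""
    ordered = sorted(
        regions,
        key=lambda r: (SOURCE_RANK[r["source"]], -r["confidence"], -r["browse_count"]),
    )
    return ordered[:top_k]

candidates = [
    {"source": "saliency", "confidence": 0.99, "browse_count": 9},
    {"source": "manual_focus", "confidence": 0.60, "browse_count": 1},
    {"source": "object_detection", "confidence": 0.95, "browse_count": 3},
]
# Manual focus wins despite its lower confidence, because source ranks first.
print([r["source"] for r in rank_regions(candidates, top_k=2)])
```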
After the user-interest regions of an image have been determined as described in detail above, many types of concrete applications can be realized based on the user-interest regions: image browsing and retrieval, image organization structure, personalized category definition and fine-grained classification of the user's album, image transmission, quick sharing, image selection, image deletion, and so on.
(1) Image browsing and retrieval.
In practice, the user's fondness for each image and the frequency with which each image is browsed differ. When an image contains a target the user is interested in, that image will be browsed more frequently. When multiple images all contain targets the user is interested in, their browse frequencies will still differ for various reasons. Therefore, it is necessary to consider the user's individuality when presenting candidate images. Moreover, it is necessary to provide the user with a multi-image, multi-target, multi-operation solution, thereby improving the user experience. Further, how to display images on mobile devices with smaller screens (such as watches) is not considered by the prior art; if an image is simply scaled down proportionally, its details are lost. It is then necessary to obtain the regions of an image the user pays more attention to and present them to the user on the smaller screen. In addition, when an album contains a large number of images, user-interest regions enable the user to browse the images quickly.
Fig. 7 is, according to embodiment of the present invention, the flow chart of fast browsing to be carried out in picture browsing.
As shown in Fig. 7, the device first detects that the user is browsing an image in the album; the device then obtains the positions of the regions of interest from the region-of-interest list and prompts the user to interact with those regions. After the device detects the user's operation on a region of interest, it generates a corresponding image search criterion according to that operation, searches the album for images meeting the criterion, and displays them to the user. In one embodiment, the operation in step 101 comprises a selection operation on at least two user-interested regions, where the at least two user-interested regions belong to the same image or to different images; managing images in step 102 comprises:

providing corresponding images and/or video frames based on the selection operation on the at least two user-interested regions.
For example, the images found may contain user-interested regions of the same categories as the at least two selected user-interested regions; or contain a user-interested region of the same category as at least one of the at least two selected regions; or contain no user-interested region of the same categories as the at least two selected regions; or contain no user-interested region of the same category as at least one of the at least two selected regions; and so on.
Specifically, the search criterion includes at least one of the following:

(A) When the selection operation is a first-type selection operation, the corresponding images and/or video frames provided contain user-interested regions corresponding to all user-interested regions targeted by the first-type selection operation. For example, the first-type selection operation determines the mandatory terms of the search result.

For example, when the user wishes to search for images containing both an aircraft and a car, two images can be found, one containing an aircraft and the other containing a car. The user selects the aircraft and the car from these two images respectively, takes them as mandatory terms of the search result, and a fast search then returns all images containing both an aircraft and a car. Alternatively, the user may select the mandatory terms of the search result from a single image containing both an aircraft and a car.

(B) When the selection operation is a second-type selection operation, the corresponding images and/or video frames provided contain a user-interested region corresponding to at least one user-interested region targeted by the second-type selection operation. For example, the second-type selection operation determines the optional terms of the search result.

For example, when the user wishes to search for images containing an aircraft or a car, two images can be found, one containing an aircraft and the other containing a car. The user selects the aircraft and the car, takes them as optional terms of the search result, and a fast search then returns all images containing an aircraft or a car. Alternatively, the user may select the optional terms of the search result from a single image containing both an aircraft and a car.

(C) When the selection operation is a third-type selection operation, the corresponding images and/or video frames provided contain no user-interested region corresponding to the user-interested regions targeted by the third-type selection operation. For example, the third-type selection operation determines the excluded terms of the search result.

For example, when the user wishes to search for images containing neither an aircraft nor a car, two images can be found, one containing an aircraft and the other containing a car. The user selects the aircraft and the car from these two images respectively, takes them as excluded terms of the search result, and a fast search then returns all images containing neither an aircraft nor a car. Alternatively, the user may select the excluded terms of the search result from a single image containing both an aircraft and a car.
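The three criterion types above behave like mandatory, optional, and excluded terms over region-of-interest labels. A minimal sketch of such a filter, assuming each album entry is simply a pair of an image identifier and a set of category labels (the data layout is an assumption, not the patent's actual structure):

```python
def search_images(album, required=None, optional=None, excluded=None):
    """Filter an album of (image_id, label_set) pairs by region-of-interest labels.

    required: mandatory terms -- every label must appear (criterion A);
    optional: optional terms -- at least one must appear (criterion B);
    excluded: excluded terms -- none may appear (criterion C).
    """
    required = set(required or ())
    optional = set(optional or ())
    excluded = set(excluded or ())
    results = []
    for image_id, labels in album:
        if not required <= labels:                # a mandatory term is missing
            continue
        if optional and not (optional & labels):  # no optional term is present
            continue
        if excluded & labels:                     # an excluded term is present
            continue
        results.append(image_id)
    return results
```

With `album = [("a", {"aircraft", "car"}), ("b", {"aircraft"}), ("c", {"car"}), ("d", {"cat"})]`, `required={"aircraft", "car"}` returns only `"a"`, `optional={"aircraft", "car"}` returns `"a"`, `"b"`, `"c"`, and `excluded={"aircraft", "car"}` returns only `"d"`, mirroring criteria (A), (B), and (C).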
In one embodiment, the operation in step 101 comprises a selection operation on a user-interested region and/or a retrieval-content input operation, where the retrieval-content input operation comprises a text input operation and/or a voice input operation. Managing images in step 102 comprises: providing corresponding images and/or video frames based on the selection operation and/or the retrieval-content input operation.

For example, the images found may contain a user-interested region whose category matches both the category of the selected user-interested region and the retrieval content; or contain a user-interested region whose category matches either the category of the selected region or the retrieval content; or contain no user-interested region matching both; or contain no user-interested region matching either; and so on.
Specifically, the search criterion includes at least one of the following:

(A) When the retrieval-content input operation is a first-type retrieval-content input operation, the corresponding images and/or video frames provided contain user-interested regions corresponding to all targeted user-interested regions. For example, the first-type operation determines the mandatory terms of the search result.

For example, when the user wishes to search for images containing both an aircraft and a car, an image containing an aircraft can be found; the user selects the aircraft from this image and inputs "car" by text or voice, taking the aircraft and the car as mandatory terms of the search result; a fast search then returns all images containing both an aircraft and a car.

(B) When the retrieval-content input operation is a second-type retrieval-content input operation, the corresponding images and/or video frames provided contain a user-interested region corresponding to at least one targeted user-interested region. For example, the second-type operation determines the optional terms of the search result.

For example, when the user wishes to search for images containing an aircraft or a car, an image containing an aircraft can be found; the user selects the aircraft from this image and inputs "car" by text or voice, taking the aircraft and the car as optional terms of the search result; a fast search then returns images containing an aircraft or a car.

(C) When the retrieval-content input operation is a third-type retrieval-content input operation, the corresponding images and/or video frames provided contain no user-interested region corresponding to the targeted user-interested regions. For example, the third-type operation determines the excluded terms of the search result.

For example, when the user wishes to search for images containing neither an aircraft nor a car, an image containing an aircraft can be found; the user selects the aircraft from this image and inputs "car" by text or voice, taking the aircraft and the car as excluded terms of the search result; a fast search then returns all images containing neither an aircraft nor a car.
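In the mixed-input examples above, a selected region label and a typed or voice-recognized keyword feed into the same set of search terms. A minimal sketch of that merge, assuming whitespace tokenization of the recognized text (the tokenization is an illustrative assumption):

```python
def merge_query_terms(selected_labels, typed_query):
    """Combine labels of selected regions with tokens from a typed or
    speech-recognized query into one set of search terms."""
    tokens = {t.lower() for t in typed_query.split() if t}
    return set(selected_labels) | tokens
```

The merged set can then be passed as mandatory, optional, or excluded terms depending on the operation type.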
In one embodiment, the selection operation on a user-interested region in step 101 is detected in at least one of the following modes: camera preview mode; image browsing mode; thumbnail browsing mode; and so on.

It can be seen that, by querying images associated with user-interested regions, embodiments of the present invention enable the user to browse and retrieve images quickly.
When displaying fast-browsed or retrieved images, the priority of the images to be displayed can first be determined, and the display order then set according to that priority, so that the user preferentially sees the images that best match his or her browsing and retrieval intent, improving the browsing and retrieval experience.
Specifically, image priority can be determined by the following criteria:

(A) Statistics gathered at the whole-image level, such as shooting time, shooting location, number of views, number of shares, and so on; image priority is then determined from these statistics.

In one embodiment, a single item among the whole-image statistics can be considered in isolation to determine image priority. For example, the closer the shooting time is to the current time, the higher the priority; or the particularity of the current time, such as a holiday or an anniversary, may be considered, in which case images matching that particularity should have higher priority. The closer the shooting location is to the device's current location, the higher the priority. The more often the user views an image, the higher (or lower) its priority; the more often an image is shared, the higher (or lower) its priority; and so on.
In one embodiment, several of these items can be considered jointly to determine image priority. For example, a weighted score can be used to calculate priority. Suppose the interval between the shooting time and the current time is t, the distance between the shooting location and the device's current location is d, the number of user views is v, and the number of shares is s. To make the items comparable, these values are normalized to obtain t', d', v', s', where t', d', v', s' ∈ [0, 1], and the priority score is obtained by the following formula:

Priority = αt' + βd' + γv' + μs'

where α, β, γ, μ are the weights of the respective items and determine their importance. Their values may be preset, set by the user, or adjusted automatically according to information such as changes in the content the user attends to or key points in time. For example, when the current time is a holiday or a key time point set by the user, the weight α can be increased automatically; when statistics show that the user views pet images far more often than images of other categories, indicating that the content the user currently attends to is pet imagery, the weight γ for pet images can be increased.
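The weighted score above can be sketched as follows. The min-max normalization, the inversion of time and distance (so that more recent and closer map to higher scores), and the equal default weights are illustrative assumptions; the patent only requires that each item be normalized to [0, 1]:

```python
def normalize(values, invert=False):
    """Min-max normalize to [0, 1]; invert=True makes small raw values
    (e.g. a short time interval or distance) map to scores near 1."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # degenerate case: all values equal
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return [1.0 - s for s in scaled] if invert else scaled

def priority_scores(images, alpha=0.25, beta=0.25, gamma=0.25, mu=0.25):
    """images: list of dicts with keys t (time interval), d (distance),
    v (view count), s (share count).
    Returns Priority = alpha*t' + beta*d' + gamma*v' + mu*s' per image."""
    t = normalize([im["t"] for im in images], invert=True)  # more recent -> higher
    d = normalize([im["d"] for im in images], invert=True)  # closer -> higher
    v = normalize([im["v"] for im in images])               # viewed more -> higher
    s = normalize([im["s"] for im in images])               # shared more -> higher
    return [alpha * t[i] + beta * d[i] + gamma * v[i] + mu * s[i]
            for i in range(len(images))]
```

Raising one weight, as described for holidays (α) or pet content (γ), simply shifts the ranking toward that item.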
(B) Statistics gathered at the object level, such as the manual focus position, a gaze heat map, and object classification confidence; image priority is determined from these statistics.

In one embodiment, image priority is determined using the manual focus position. The point the user focuses on manually when shooting is generally the user's region of interest. The device records the user's manual focus position, and an image containing the object detected at that position has higher priority.

In one embodiment, image priority is determined using a gaze heat map. A gaze heat map records the user's attention over an image, counting the number of times the user's gaze lands on, and/or how long it dwells at, each pixel or object position. The more often the user's gaze lands on a position, and/or the longer it dwells there, the higher the priority of images containing the object at that position.

In one embodiment, image priority is determined using object classification confidence. The classification confidence of each object in an image reflects the likelihood that a region of interest belongs to a certain object category; the higher the confidence, the higher the probability that the region belongs to that category, and images containing high-confidence objects should have higher priority.
Besides considering each item in isolation as above, the object-level items can also be considered jointly to determine image priority, in the same way as the whole-image-level items.
(C) Besides examining each object independently, the relations between objects can also be examined, and image priority determined from those relations.

In one embodiment, image priority is determined using semantic combinations of objects. The semantic meaning of a single object can be used narrowly for scanning the album: the user selects multiple objects in an image and the device returns images containing the same objects. On the other hand, a combination of multiple objects can be abstracted into a broader semantic meaning; for example, the combination of "person" and "birthday cake" can be abstracted into "birthday party", and a "birthday party" does not necessarily include a "birthday cake". Object-category combinations can thereby be used to search for more abstract semantic concepts, and the classification results of objects can be linked with the classification results of the whole image. The conversion from the semantic categories of multiple objects to an upper-level abstract category can be realized by predefinition, for example defining the combination of "person" and "birthday cake" as "birthday party". It can also be realized by machine learning: the objects an image contains are abstracted into a feature vector; for example, if an image may contain N kinds of objects, the image can be described by an N-dimensional vector, and images are then assigned to different categories by supervised or unsupervised learning.
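The N-dimensional object-presence vector and the predefined combination rule can be sketched as follows; the four-object vocabulary and the single "birthday party" rule are illustrative assumptions:

```python
VOCAB = ["person", "birthday cake", "dog", "car"]  # assumed N-object vocabulary

def image_vector(labels):
    """Describe an image by an N-dimensional 0/1 vector recording which of
    the N known object classes its regions of interest contain."""
    return [1 if obj in labels else 0 for obj in VOCAB]

# predefined conversion from object combinations to upper-level concepts
COMBINATIONS = {("person", "birthday cake"): "birthday party"}

def abstract_class(labels):
    """Return the first predefined abstract concept whose member objects
    all appear in the image, or None if no combination matches."""
    for combo, concept in COMBINATIONS.items():
        if all(obj in labels for obj in combo):
            return concept
    return None
```

In the learned variant described in the text, the same 0/1 vectors would instead be fed to a supervised or unsupervised learner rather than matched against hand-written rules.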
In one embodiment, image priority is determined using the relative positions of objects. Besides semantic information, the relative positions of objects can be used to set image priority. For example, when choosing regions of interest the user selects objects A and B, and object A lies to the left of object B; then in the retrieval results, images in which object A lies to the left of object B should have higher priority. The priority ordering can also be given by more precise numerical information: for example, in the image the user operated on, the displacement from object A to object B can be represented by a vector v1; in a retrieved image, the displacement from object A to object B is v2; the retrieved images can then be ranked by the magnitude of the difference between the two displacement vectors.
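Ranking by the difference of the two displacement vectors might look like the following sketch, assuming 2-D displacements and Euclidean magnitude (both assumptions for illustration):

```python
import math

def layout_difference(ref_vec, cand_vec):
    """Magnitude of the difference between the A-to-B displacement in the
    reference image and in a candidate image; smaller means the layouts
    are more alike."""
    return math.hypot(ref_vec[0] - cand_vec[0], ref_vec[1] - cand_vec[1])

def rank_by_layout(ref_vec, candidates):
    """candidates: list of (image_id, a_to_b_vector); most similar first."""
    return [img for img, vec in
            sorted(candidates, key=lambda c: layout_difference(ref_vec, c[1]))]
```

An image whose A-to-B displacement nearly matches the reference (e.g. both pointing rightward with similar length) sorts ahead of one where A and B are swapped.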
(2) Image organization structure.
In image organization, images can be aggregated and separated according to their attribute lists to establish a tree structure. Fig. 8 is a flow chart of realizing a personalized tree structure according to an embodiment of the present invention. The device first detects a trigger condition for building the tree structure, such as the number of images reaching a threshold or a manual trigger by the user; it then extracts the attribute list of each image in the album and, according to the classification information in each attribute list (the category of the whole image and/or of its regions of interest) and the number of images, divides the images into sets, each set being a node in the tree structure; if needed, subsets can be further divided within each set. The device displays the images belonging to each node to the user according to the user's operations. In the tree structure, a node at each level represents a category: the closer to the root node, the more abstract the category; the closer to a leaf node, the more specific. A leaf node is a specific user-interested region or image.
The tree structure can also be adjusted individually according to the distribution of images in different users' albums. For example, if user A's album contains many images of vehicles while user B's album contains few, the vehicle subtree should have more levels in user A's album and fewer in user B's. The user can switch freely and quickly between the levels, achieving the goal of fast browsing.
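A minimal sketch of grouping album images into such a tree, assuming each image carries one fine-grained label and a hypothetical fine-to-coarse taxonomy (real attribute lists would carry more information):

```python
def build_tree(album, taxonomy):
    """album: list of (image_id, fine_label);
    taxonomy: fine_label -> coarse_label.
    Returns a two-level tree {coarse: {fine: [image_ids]}}; an album that
    covers more fine labels naturally yields a wider, deeper subtree."""
    tree = {}
    for image_id, fine in album:
        coarse = taxonomy.get(fine, fine)  # unknown labels become their own node
        tree.setdefault(coarse, {}).setdefault(fine, []).append(image_id)
    return tree
```

The coarse keys correspond to nodes near the root (abstract categories), the fine keys to nodes near the leaves, and the image-id lists to the leaves themselves.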
In one embodiment, managing images based on user-interested regions in step 102 comprises: displaying thumbnails organized in the tree structure; and/or displaying full images organized in the tree structure.

In one embodiment, the tree structure is generated by: aggregating, based on an aggregation operation, images containing user-interested regions with the same category label; separating, based on a separation operation, images containing user-interested regions with different category labels; and establishing, based on a tree-building operation, a tree structure containing hierarchical relations for the aggregated and/or separated images.
In one embodiment, the method further comprises at least one of the following: performing, based on a category-splitting operation, a category split on a layer of the tree structure when the number of leaf nodes in that layer exceeds a predetermined threshold; displaying, based on a first-type trigger operation on a selected layer of the tree structure, the images belonging to that layer as thumbnails; displaying, based on a second-type trigger operation on a selected layer, the images belonging to that layer as full images; displaying, based on a third-type trigger operation on a selected layer, the level below the selected layer; displaying, based on a fourth-type trigger operation by the user on a selected layer, the level above the selected layer; displaying, based on a fifth-type trigger operation by the user on a selected layer, all images contained in the selected layer; and so on.

It can be seen that embodiments of the present invention optimize the image organization structure based on user-interested regions; on various interfaces, the user can switch quickly between levels, achieving the goal of viewing images rapidly.
(3) Personalized category definition and fine-grained classification of the user's album.

When managing an album in a personalized way, the user needs to give personalized category definitions to images and the regions of interest they contain, for example defining a group of images as "my paintings", or defining the regions containing a dog in a batch of images as "my pet dog".
Personalized category definition and fine-grained classification of the user's album are illustrated below taking image classification as an example; for regions of interest, similar operations and techniques can realize personalized category definition and fine-grained classification.

In existing album management products the user participates only passively: what management strategy the product provides is decided entirely by its developers, and in order to suit a wide user base, the strategies developers set tend toward generality; existing album management functions therefore do not fully meet users' individual needs.

In addition, in existing products the cloud and the mobile device classify images independently of each other, whereas combining the two can improve the accuracy, intelligence, and personalization of album management. Compared with a mobile device, a cloud server has far greater computing and storage capacity and can run more complex algorithms to meet users' demands; cloud resources should therefore be utilized more rationally to provide the user with a better experience.
Fig. 9 is a flow chart of realizing personalized category classification according to an embodiment of the present invention. The device first defines personalized categories according to the user's operations; the classification of personalized categories can be realized by either a local or a cloud solution, so the local and cloud models for personalized classification can both be updated; the classification results of the two updated models are finally fused to obtain accurate personalized classification results.
To meet the user's demand for personalized categories, their definitions must first be determined. Methods of defining personalized categories may include at least one of the following:

(A) The user defines them actively and explicitly, i.e., tells the device which images should be marked as which category. For example, the device assigns each image an attribute list, to which the user can add category names; the number of categories may be one or more. The device assigns each user-added category name a unique identifier and groups different images bearing the same identifier into the same category.

(B) Category definition is completed according to the user's natural operations on the album. For example, while organizing photos in the album, the user moves a group of images into a folder. The device judges from this operation that the group of photos constitutes a personalized category of the user; when a subsequent photo appears, the device judges whether that photo belongs to the same category as the group, and if so either automatically displays the image in the folder the user created, or asks the user whether to display the image in that folder.

(C) Category definition is completed according to the user's other natural operations on the device. For example, while the user uses social software, the device analyzes the user's sharing operations and assigns images in the album to personalized categories according to social relations. By analyzing the user's behavior in social software, more detailed personalized categories can also be formulated; for example, when sharing a photo of his or her pet with a friend the user may say, "Look, my puppy is chasing a butterfly"; the device can thereby learn which of the many pet dogs in the album is the user's own pet, and a new personalized category "my pet dog" can be created.

(D) The device can also automatically recommend category subdivision to the user. It recommends subdividing the images in the album by analyzing user behavior; for example, when the user uses a search engine on the Internet, the device judges the user's points of interest from the search keywords and asks the user whether to subdivide the images on the device related to those keywords; the user can decide the subdivision strategy according to his or her own needs, thereby completing the personalized category definition. The device can also recommend subdivision by analyzing already-classified images; for example, when the number of images in some category exceeds a certain amount, the excess inconveniences the user when browsing, organizing, and sharing, so the device asks the user whether to subdivide that category; the user decides each category according to his or her interests, completing the personalized category definition.
After the user defines personalized categories, the way personalized classification is realized can be decided according to how much the categories change; it comprises at least one of the following methods:

(A) When a personalized category falls within the range of the preset categories of the classification model, the preset categories of the model are recombined at the device or in the cloud to satisfy the user's personalized definition. For example, the preset categories of the classification model are "white cat", "black cat", "white dog", "black dog", "cat", and "dog", and the user-defined personalized categories are "cat" and "dog"; then "white cat" and "black cat" in the model are merged into "cat", and "white dog" and "black dog" are merged into "dog". For another example, suppose the user-defined personalized categories are "white cute pet" and "black cute pet"; the preset categories are then recombined by merging "white cat" and "white dog" into "white cute pet", and "black cat" and "black dog" into "black cute pet".
(B) When a personalized category does not fall within the range of the preset categories of the classification model, it cannot be obtained by recombining preset categories, and the classification model must be updated. The model can be updated locally on the device, or the update can be performed in the cloud. Using the image sets of the personalized categories defined by the above means, an initial model capable of personalized classification can be trained. For example, while browsing, the user changes the label of a painting image from "painting" to "my painting". The device detects the user's modification of the image attribute; "my painting" is then defined as a personalized category, and the relabeled image serves as a training sample of that category.

Since training samples may be few within a short time after a personalized category is defined, the classification performance of the initial model may be unstable. Therefore, when an image is assigned to the new category, the device can interact with the user, for example asking whether this image belongs to the personalized category. Through this interaction the device determines whether the image was correctly assigned to the personalized category; if so, the image is used as a positive sample of the category, otherwise as a negative sample. Training samples are thus collected further, and through iterative training the classification performance of the personalized category model improves and finally stabilizes. If the main body of an image is text, text recognition is performed on the image and classification follows the recognition result, so text images of different themes can be assigned to their respective categories. If the model is trained in the cloud, the difference between the new personalized model and the current model is detected, and only the differing part is delivered to the terminal in the form of an update package; for example, if the model has grown a new branch for personalized classification, only the new branch need be transmitted, not the whole model.
To classify the images in the user's album more accurately, the interaction between the local classification engine and the cloud classification engine must be considered. The following situations arise:

(A) The user gives no feedback. Since the cloud model is the full model, the local side and the cloud may produce different classification results for the same image. In general, the cloud's full model has a more complex network structure and usually exceeds the local model in classification accuracy. If the user sets classification results to refer to the cloud, the cloud synchronously inspects the images needing classification. When the results differ, indicators such as classification confidence are consulted; for example, when the cloud's confidence exceeds a certain threshold, the image is taken to bear the cloud's label, the local classification result is updated according to the cloud's, and the local error can be reported to the cloud for subsequent improvement of the local model. The error report should include the misclassified image, the terminal's wrong classification result, and the correct classification result (the cloud's). Based on this information, the cloud adds the image to the training sets of the categories involved, for example adding it to the negative sample set of the wrongly assigned category and to the positive sample set of the missed category, and then trains the model to improve its performance.

Suppose the terminal had no connection to the cloud before (e.g., because of network conditions), or the user set classification results not to refer to the cloud; when a connection is later established, or the user resets classification results to refer to the cloud, the terminal can judge label confidence from the output classification scores. For low-confidence labels, it can wait until the user logs in to the cloud and then ask the user in batch for the correct labels of those pictures, and update the model accordingly. A game can also be designed so the user completes this task in a relaxed atmosphere.
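The confidence-gated adoption of the cloud label in situation (A) can be sketched as follows; the 0.8 threshold and the two-value return shape are illustrative assumptions:

```python
def reconcile(local_label, cloud_label, cloud_confidence, threshold=0.8):
    """When local and cloud labels differ, adopt the cloud label only if its
    confidence clears the threshold; the second return value says whether
    the local error should be reported to the cloud for model improvement."""
    if local_label == cloud_label:
        return local_label, False
    if cloud_confidence >= threshold:
        return cloud_label, True   # local label was wrong: update and report
    return local_label, False      # cloud not confident enough: keep local
```

The `True` branch corresponds to the error report described above (misclassified image, wrong terminal result, correct cloud result).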
(B) The user can correct the classification results of the cloud or the terminal. When the user corrects the label of a misclassified image, the terminal uploads the error to the cloud, including the misclassified photo, the wrong category, and the correct category the user specified. When users feed back images, the cloud can gather the image sets fed back by many different users for training; if samples are insufficient, it crawls similar images on the network, labels them with the user-specified category to enlarge the sample size, and uses them to guide model training. The above model training process can also be completed by the terminal.

If the number of collected and crawled images is especially small, insufficient to train a new model, then, locally, images are mapped according to their features into a space of a set dimensionality and clustered in that space to obtain cluster centres; the category of a test image is determined by the distance between its mapped position in that space and each cluster centre. If the category the user corrected to is not close to the wrong category, images whose features are close to the misclassified image are all assigned a higher-level concept. For example, an image of a "cat" is wrongly assigned to "dog", but its position in feature space is closer to the cluster centre of "cat", so the image cannot be assigned to "dog" by distance; the image's category is then lifted one level and marked "pet".
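The nearest-centroid fallback and the one-level lift can be sketched as follows; the parent table and the rule for when to lift are assumptions drawn from the "cat"/"dog"/"pet" example:

```python
import math

def nearest_centroid(feature, centroids):
    """centroids: label -> cluster-centre vector; return the label whose
    centre is closest to the image feature (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(feature, centroids[label]))

PARENT = {"cat": "pet", "dog": "pet"}  # assumed upper-level concepts

def resolve_label(feature, model_label, centroids, parent=PARENT):
    """If the nearest cluster centre disagrees with the model's label, fall
    back to their shared parent concept when one exists (e.g. an image the
    model calls 'dog' that sits nearest the 'cat' centre becomes 'pet')."""
    nearest = nearest_centroid(feature, centroids)
    if nearest == model_label:
        return model_label
    shared = parent.get(nearest)
    if shared is not None and shared == parent.get(model_label):
        return shared
    return nearest
```

With centres for "cat" and "dog", a feature near the "cat" centre that the model labeled "dog" resolves to "pet", matching the example in the text.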
A batch of pictures fed back by the user may contain mis-operations. For example, an image of a "cat" is correctly classified as "cat", but the user mistakenly marks it as "dog"; such an operation is a mis-operation. These feedbacks can be screened (especially when error feedback is given against labels with high confidence). A mis-operation detection model can be built in the background, dedicated to judging such pictures. For example, training samples for this model can be obtained by interacting with the user: when the classification confidence of an image is higher than a certain threshold and the user marks the sample as another category, the user is asked whether the change is intended; if the user chooses not to change it, the image can serve as a training sample for the mis-operation model. This model can run relatively slowly and be dedicated to the mis-labeled-picture correction step. When the mis-operation detection model judges that the user made a mis-operation, the user can be prompted, or the mis-operated picture can be excluded from the training samples.
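The interaction described above can be condensed into one decision function. A minimal sketch, assuming a confidence threshold of 0.9 (the patent leaves the threshold unspecified):

```python
def handle_feedback(confidence, old_label, new_label, user_confirms_change,
                    threshold=0.9):
    """Screen one relabeling event for possible mis-operation.

    Returns (accept_correction, misop_training_sample).
    threshold is an assumed value; the patent only says "a certain threshold".
    """
    if new_label == old_label:
        return True, None               # nothing actually changed
    if confidence < threshold:
        return True, None               # low-confidence label: trust the user
    # A high-confidence label was overturned: ask the user to confirm.
    if user_confirms_change:
        return True, None
    # The user declined the change, so the relabel was likely a slip;
    # keep the event as a training sample for the mis-operation detector.
    return False, (old_label, new_label, confidence)
```

Only relabelings of high-confidence predictions trigger the confirmation dialog, which keeps the extra interaction rare.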
(C) When the local and cloud images differ. When local photos have not been uploaded, the terminal may receive a synchronization request sent by the cloud. During photo upload, a real-time classification operation can be carried out each time a photo finishes transmitting. To reduce bandwidth occupancy, only some of the images may be uploaded, selected according to the terminal's classification confidence: for example, when the classification confidence of an image is lower than a certain threshold, the classification result is considered unreliable and the image needs to be uploaded to the cloud for reclassification. When the cloud's classification results differ from the local ones, the local classification results are synchronously updated.
(4) Image transmission and emphasized display based on the user-interested region of the image.
When the equipment detects an image data transmission request, it judges the transmission network type and the transmission quantity, and uses different transmission means according to them. Transmission means include transmitting a fully compressed image, transmitting a partially compressed image, transmitting an uncompressed image, and so on.
In the partial-compression mode, a low compression ratio is applied to the user-interested region so as to guarantee the definition of that region, while a high compression ratio is applied outside the user-interested region so as to save electric quantity consumption and bandwidth resources during transmission. Figure 10 is a flow chart of selecting different transmission modes according to an embodiment of the present invention. Equipment A requests an image from equipment B; equipment B determines the transmission mode by checking various indices, such as the network bandwidth, network quality, or user settings. In some cases, equipment B requests additional information from equipment A, such as the charge condition of equipment A, to assist in determining the transmission mode. The transmission mode can include three modes: 1) a high-quality transmission mode, in which no compression is applied to the image; 2) a medium-quality transmission mode, in which the region of interest is compressed at a low ratio and the background at a high ratio; and 3) a low-quality transmission mode, in which the full image is compressed at a high ratio. Finally, equipment B transmits the image to equipment A. In some cases, equipment B can also actively send the image to equipment A.
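The three-way choice made by equipment B can be sketched as a small decision function. The cut-off values are illustrative assumptions; the patent only says the decision weighs bandwidth, network quality, and, optionally, the peer's battery state.

```python
def choose_mode(bandwidth_mbps, network_quality, battery_low=False):
    """Pick one of the three transmission modes described above.

    network_quality: one of "good", "fair", "poor" (an assumed coding).
    Returns "high" (no compression), "medium" (low ratio on the ROI,
    high ratio on the background) or "low" (high ratio on the full image).
    """
    if battery_low or network_quality == "poor":
        return "low"
    if bandwidth_mbps >= 50 and network_quality == "good":
        return "high"
    return "medium"
```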
In one embodiment, managing the image in step 102 includes: compressing the image based on image transmission parameters and the user-interested region in the image, and transmitting the compressed image; and/or receiving an image sent by a server, base station, or user equipment, where the image has been compressed based on image transmission parameters and a user-interested region. Specifically, the image transmission parameters include: the number of images to be transmitted, the transmission network type, the transmission network quality, and so on.
Compressing the image includes at least one of the following:
(A) When the image transmission parameters meet a user-interested-region non-compression condition, the image regions outside the user-interested region in the image to be transmitted are compressed, and the user-interested region is not compressed. For example, when, based on predetermined interval thresholds for the number of images to be transmitted, the number of images to be transmitted is determined to fall in a predetermined suitable interval, it can be judged that the user-interested-region non-compression condition is met. In this case, the image regions outside the user-interested region in the image to be transmitted are compressed, while the user-interested region is not.
(B) When the image transmission parameters meet a differential compression condition, the image regions outside the user-interested region in the image to be transmitted are compressed at a first compression ratio, and the user-interested region is compressed at a second compression ratio, where the second compression ratio is lower than the first. For example, when the transmission network type is determined to be a mobile wireless network, it can be judged that the differential compression condition is met. In this case, every region of the image to be transmitted is compressed: the regions outside the user-interested region at the first compression ratio, and the user-interested region at the second, lower compression ratio.
(C) When the image transmission parameters meet a non-differential compression condition, the image regions outside the user-interested region and the user-interested region itself are compressed at the same compression ratio. For example, when, based on a predetermined transmission network quality threshold, the transmission network quality is determined to be poor, it can be judged that the non-differential compression condition is met. In this case, the same compression ratio is applied both outside and inside the user-interested region.
(D) When the image transmission parameters meet a non-compression condition, no compression is performed on the image to be transmitted. For example, when, based on a predetermined transmission network quality threshold, the transmission network quality is determined to be good, it can be judged that the non-compression condition is met, and no compression is performed on the image to be transmitted.
(E) When the image transmission parameters meet a multiple-compression condition, compression and one or more transmissions are performed on the image to be transmitted. For example, when, based on a predetermined transmission network quality threshold, the transmission network quality is determined to be very poor, it can be judged that the multiple-compression condition is met. In this case, compression and one or more transmissions are performed on the image to be transmitted.
In one embodiment, the method includes at least one of the following: when the number of images to be transmitted is smaller than a predetermined first threshold, it is judged that the image transmission parameters meet the non-compression condition; when the number of images to be transmitted is greater than or equal to the first threshold and smaller than a predetermined second threshold, it is judged that the image transmission parameters meet the user-interested-region compression condition, where the second threshold is greater than the first; when the number of images to be transmitted is greater than or equal to the second threshold, it is judged that the image transmission parameters meet the user-interested-region non-differential compression condition; when the assessed value of the transmission network quality is smaller than a predetermined third threshold, it is judged that the image transmission parameters meet the multiple-compression condition; when the assessed value of the transmission network quality is greater than or equal to the third threshold and smaller than a predetermined fourth threshold, it is judged that the image transmission parameters meet the differential compression condition, where the fourth threshold is greater than the third; when the transmission network type is a free network (such as a WiFi network), it is judged that the image transmission parameters meet the non-compression condition; when the transmission network type is a carrier network, the compression mode is adjusted according to the tariff: the higher the tariff, the higher the compression ratio of the image.
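The threshold rules above can be written as one dispatch function. A sketch only: the patent leaves every threshold "predetermined" but unspecified, so `t1..t4` below are placeholder values, and the quality score is assumed to be normalized to [0, 1].

```python
def judge_conditions(n_images, quality, network, t1=10, t2=100, t3=0.3, t4=0.7):
    """Map transmission parameters to the compression conditions above.

    n_images: number of images to be transmitted.
    quality:  assessed network quality in [0, 1] (assumed scale).
    network:  "free" (e.g. WiFi) or "carrier".
    Returns the set of conditions that are met.
    """
    met = set()
    if network == "free" or n_images < t1:
        met.add("no-compression")
    elif n_images < t2:
        met.add("roi-compression")       # compress only outside the ROI
    else:
        met.add("non-differential")      # same ratio everywhere
    if quality < t3:
        met.add("multiple")              # compress and transmit repeatedly
    elif quality < t4:
        met.add("differential")          # two ratios, ROI favored
    return met
```

The quantity-based and quality-based rules are independent in the text, so the function may return two conditions at once, matching the aggregated judgment mentioned next.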
In fact, the present invention can also judge whether any of the above compression conditions is met based on an aggregated weighting of multiple image transmission parameters; this is not elaborated further here. It can be seen that, by applying differentiated compression to the image to be transmitted based on the user-interested region, the embodiment of the present invention can save electric quantity consumption and bandwidth resources during transmission while ensuring that the user-interested region remains clearly viewable by the user.
In one embodiment, managing the image in step 102 includes at least one of the following: (A) when the display screen is smaller than a predetermined size, displaying the category image or category text of the user-interested region; (B) when the display screen is smaller than a predetermined size and a category of user-interested region is chosen by a user selection operation, displaying images of that category and, based on a user switching operation, switching to display other images of that category; (C) when the display screen is smaller than a predetermined size, displaying the image based on the number of user-interested regions. Figure 20 is a flow chart of quickly browsing a tree structure on a small-screen device according to an embodiment of the present invention. The small-screen device requests the attribute list of an image and then queries the image. When the attribute list contains at least one region of interest, the regions of interest are sorted; the sorting manner may refer to the description in the quick browsing and retrieval section. The first sorted region of interest is then displayed on the screen; if the equipment detects a user operation to switch the display area, the next region of interest is displayed. If there is no region of interest in the attribute list, the middle portion of the image is displayed.
Wherein, when the display screen is smaller than the predetermined size, displaying the image based on the number of user-interested regions includes at least one of the following: (C1) when the image contains no user-interested region, displaying the image as a thumbnail or shrinking the image to a size adapted to the display screen; (C2) when the image contains one user-interested region, displaying that user-interested region; (C3) when the image contains multiple user-interested regions, displaying each user-interested region in turn, or displaying the first user-interested region and, based on a user switching operation, switching to display the other user-interested regions in the image.
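Cases C1-C3 reduce to a small selector. A minimal sketch, assuming the regions arrive pre-sorted by interest and that repeated switch operations simply cycle:

```python
def regions_to_show(rois, switch_count=0):
    """Decide what a small screen shows for one image (cases C1-C3 above).

    rois: list of region identifiers sorted by interest (an assumption;
    the sorting itself is described elsewhere in the patent).
    switch_count: how many times the user has switched the display area.
    """
    if not rois:
        return "thumbnail"                    # C1: shrink the whole image
    if len(rois) == 1:
        return rois[0]                        # C2: the single ROI
    return rois[switch_count % len(rois)]     # C3: cycle on each switch
```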
It can be seen that, when the image display equipment is small, the embodiment of the present invention displays the user-interested region with emphasis, which can improve the displaying efficiency of the user-interested region.
(5) Quick sharing based on the user-interested region of the image.
The equipment establishes associations between images based on the relevance of their user-interested regions. Methods of establishing associations include detecting the contacts appearing in the images, detecting similar semantic content, the same geographic location, a particular time period, and so on. An association between images can be with the same contact, from the same event, containing the same semantic concept, and so on.
In the thumbnail browsing interface, related photos can be marked in a certain manner, and a one-tap sharing prompt is provided to the user. Figure 11 is a flow chart of a user actively initiating image sharing according to an embodiment of the present invention. The equipment detects that an image collection has been chosen by the user and, according to the user's sharing records and the correlation between the selected images and previously shared images, determines the related contacts. The equipment judges, by the user's selection, whether the image collection is to be shared with an individual or with a group. When the user chooses to share with a group, the equipment sets up a designated group and shares the image collection with the group. When the user chooses to share with individuals, the equipment shares the image collection with each individual by sending it multiple times. Figures 12A-12B are flow charts of a user sharing images while using social software according to an embodiment of the present invention. When the equipment detects that the user is using social software, such as an instant messaging program, the equipment, according to the user's sharing records in that software, chooses from the album an image collection composed of images that have not been shared, and asks the user whether to share it. After the equipment detects the user's confirmation, it shares the image collection. In addition, the equipment can also determine the image collection to be shared by analyzing the text the user inputs in the social software, as shown in Figure 12B.
In some embodiments, when the equipment detects a sharing action by the user, it shares the related images with each contact according to the contacts contained in the images. Alternatively, the related contacts are automatically put into a group chat and the images are shared to the group. In an instant messenger, the user's input can be automatically analyzed to judge whether the user has the wish to share an image; if so, the content the user intends to share is analyzed, and the relevant region is automatically cropped from the image and provided to the user for selection and sharing.
In one embodiment, managing the image in step 102 includes: determining an object to be shared with, and sharing the image with that object; and/or determining the image to be shared based on the chatting object or the chat content with the chatting object, and sharing that image with the chatting object. The embodiment of the present invention can detect the relevance between user-interested regions, establish associations between images based on the detection results, determine the sharing object or the image to be shared, and then share the associated images. Preferably, the relevance between user-interested regions includes: category relevance, time relevance, position relevance, person relevance, and so on.
Specifically, sharing an image based on its user-interested region includes at least one of the following: (1) determining a contact group to be shared with based on the user-interested region of the image, and, based on a user group-sharing operation, sharing the image with that contact group in group mode; (2) determining contacts to be shared with based on the user-interested region of the image, and, based on a user individual-sharing operation, sending the image to each contact respectively, where the image shared with each contact contains the user-interested region corresponding to that contact; (3) when a chat sentence between the user and a chatting object corresponds to a user-interested region of an image, recommending the image to the user as a sharing candidate; (4) when the chatting object corresponds to a user-interested region in an image, recommending the image to the user as a sharing candidate.
In one embodiment, after an image is shared, the shared image is marked according to the contact it was shared with. It can be seen that, by sharing images based on their user-interested regions, the embodiment of the present invention can easily locate the image to be shared among a large number of images and can conveniently share it in a variety of application environments.
(6) An image digest method based on the user-interested region.
For example, the image digest method based on the user-interested region includes a mode from image to text. In this mode, the images in a particular time period are first aggregated and separated: the content of the images is analyzed, aided by their shooting position and time; the images from the same time period and the same event are aggregated into an image collection; a text passage describing the content of the collection is produced; and an image collage is automatically generated. During collage generation, the image positions and the collage template are adjusted automatically according to the image regions so that the key regions are included in the collage, and the collage links back to the original images in the album.
In one embodiment, managing the image in step 102 includes: selecting images based on user-interested regions, and generating a collage based on the selected images, where the user-interested region of each selected image is highlighted in the collage. In this embodiment, the selected images can be displayed automatically by the system. In one embodiment, the method further comprises: detecting a user selection operation on a user-interested region in the collage, and displaying the selected image containing the chosen user-interested region. In this embodiment, the selected image can be displayed based on the user selection operation.
For another example, the image digest method based on the user-interested region includes a mode from text to image. In this mode, the user first inputs a text passage; the system then extracts keywords from the text, chooses associated images in the image set, crops the images if necessary, and inserts these associated images or image regions into the user's text paragraphs as illustrations.
In one embodiment, managing the image in step 102 includes: detecting user input text; retrieving images containing user-interested regions associated with the text; and inserting the retrieved user-interested regions into the user input text.
(7) An image conversion method based on image content.
The system can analyze the images in the album according to their appearance and time, and apply natural language processing to the text in the images. For example, in the thumbnail interface, the equipment marks text images from the same source in a certain manner and provides the user with a recommended merge button. When a user click is detected, an image conversion interface is entered, in which the user can add or delete images; the finally adjusted images produce one text document. In one embodiment, the method also includes: when multiple images are judged to come from the same document, automatically aggregating the multiple images into a document, or aggregating them into a document based on a user trigger operation. It can be seen that the embodiment of the present invention can aggregate images into a document.
(8) Intelligent deletion recommendation based on image content.
For example, the content of the images is analyzed based on user-interested regions; according to factors such as visual similarity, content similarity, image quality, and contained content, images that are visually similar, similar in content, low in quality, or contain no meaningful object are recommended to the user for deletion. Image quality includes an aesthetic measure: the aesthetics of an image can be judged according to the positions of the user-interested regions in the picture and the relations between the user-interested regions. In the deletion interface, the images recommended for deletion are displayed to the user in groups; during display, taking a certain image as the benchmark (for example, the first image or the highest-quality image), the differences between the other images and the benchmark image are shown.
In one embodiment, managing the image in step 102 includes at least one of the following: (A) automatically deleting images, or recommending images for deletion, based on a category comparison of the user-interested regions in different images; (B) determining a semantic-information inclusion degree for each image based on the user-interested regions in different images, and automatically deleting images, or recommending images for deletion, based on a comparison of the semantic-information inclusion degrees; (C) scoring each image based on the relative positions between its user-interested regions, and automatically deleting images, or recommending images for deletion, based on the scoring results; (D) scoring each image based on the absolute position of at least one user-interested region, and automatically deleting images, or recommending images for deletion, based on the scoring results.
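Case (D) can be sketched with a toy aesthetic score. This is one plausible reading only: the patent does not define the "aesthetic measure", so the center-distance score, the coordinate convention, and the 0.5 cut-off are all assumptions.

```python
def deletion_candidates(images, min_score=0.5):
    """Score images from their ROIs and recommend low scorers for deletion.

    images: {image_id: list of (cx, cy) ROI centers in [0, 1] coordinates}.
    The score rewards ROIs near the image center; an image with no ROI
    (no meaningful object) scores 0 and is always a deletion candidate.
    """
    def score(rois):
        if not rois:
            return 0.0
        d = sum(abs(cx - 0.5) + abs(cy - 0.5) for cx, cy in rois) / len(rois)
        return max(0.0, 1.0 - d)
    return sorted(img for img, rois in images.items() if score(rois) < min_score)

photos = {"a": [(0.5, 0.5)], "b": [], "c": [(0.05, 0.95)]}
```

Here "a" (a centered ROI) is kept, while "b" (no meaningful object) and "c" (an ROI pushed into a corner) are recommended for deletion.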
It can be seen that the embodiment of the present invention realizes intelligent deletion recommendation based on user-interested regions, which can save storage space and improve image management efficiency.
The above describes exemplary ways of managing images based on user-interested regions. Those skilled in the art will realize that this description is only exemplary and is not intended to limit the protection scope of the embodiments of the present invention. Below, with reference to embodiments, concrete examples of managing images based on user-interested regions are illustrated.
Embodiment 1: Quick browsing in the image browsing interface
Step 1: The equipment prompts the user with the positions of the selectable regions in the image
Here, the equipment detects the relative position of the user's finger or stylus on the screen and compares that position with the position of the user-interested region in the image. If the two positions overlap, the user is reminded that the user-interested region is selectable. The manner of reminding the user can be highlighting the selectable region on the image, frame selection, equipment vibration, and so on.
Figures 13A-13G are schematic diagrams of quick browsing in the image browsing interface according to an embodiment of the present invention.
As shown in Figure 13A, when the equipment detects that the user's finger falls on the position of the automobile, the automobile region is highlighted, prompting the user that the automobile is selectable.
It should be noted that step 1 is optional. In actual applications, possibly all object regions are selectable, and the user can directly select the appropriate region according to the type of object. For example, the equipment stores a photo of an automobile; the automobile region is simply selectable, and the equipment need not prompt the user that it is.
Step 2: The equipment detects the user's operations in the image
The equipment detects the user's operation in each selectable region. The forms of these operations can include clicking, double-clicking, stroking, circle selection, and so on. Each operation form can correspond to a specific search implication, and the search implications can include "required", "optional", "excluded", "only", and so on. As shown in Figures 13B, 13F, and 13G, clicking corresponds to "optional", double-clicking corresponds to "required", stroking corresponds to "excluded", and circle selection corresponds to "only". The retrieval implication corresponding to an operation may be called a search criterion, and these search criteria can be system-predefined or user-defined.
Besides physical operations on the screen, each selectable region can also be operated on by voice input. For example, if the user wishes to choose the automobile by voice, the user says "automobile"; the equipment detects the voice input "automobile" and determines that the automobile needs to be operated on. When the user's voice input corresponds to "required", the equipment detects the voice input "required" and determines that it needs to return images that must contain the automobile.
The user can also combine physical operations and voice operations, for example choosing a region by a physical operation and determining the operation form by voice. For example, when the user wishes to view images that must contain an automobile, the user clicks the automobile region on the image and then voice-inputs "required"; the equipment detects the click on the automobile region and the voice input "required", and determines that it needs to return images that must contain an automobile.
After the equipment detects the user's operation, it shows the operation on the screen in some form so that the user can conveniently carry out other operations. As shown in Figure 13C, the selected content can be displayed as text, with different colors representing different operations; the user can also click the minus sign on an icon to revoke the corresponding operation.
For example, if the user wishes to search for images containing only an automobile, the user can circle-select the automobile in one image. The equipment detects the circle-selection operation on the automobile region and determines that it needs to provide the user with images containing only an automobile.
As another example, if the user wishes to search for images containing both an automobile and an aircraft, the user can double-click the automobile region and the aircraft region in one image. The equipment detects the double-click operations on the automobile region and the aircraft region, and determines that it needs to provide the user with images containing both an automobile and an aircraft.
As another example, if the user wishes to search for images containing an automobile or an aircraft, the user can click the automobile region and the aircraft region in one image. The equipment detects the single-click operations on the automobile region and the aircraft region, and determines that it needs to provide the user with images containing an automobile or an aircraft.
As another example, if the user wishes the found images not to contain an automobile, the user can perform a stroking operation on the automobile region in one image. The equipment detects the stroking operation on the automobile region and determines that it needs to provide the user with images not containing an automobile.
Besides the above selection operations, the user can also perform a handwriting operation in an image region. A handwriting operation can likewise correspond to a specific search implication, one of "required", "optional", "excluded", or "only" described above. For example, suppose handwriting corresponds to "required". When the user wishes to use an image containing an automobile but no aircraft to search for images containing both an automobile and an aircraft, the user can handwrite "aircraft" in any region of the image. The equipment analyzes that the user's handwritten content is "aircraft" and determines that it needs to provide the user with images containing both an automobile and an aircraft.
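The four search implications combine into one filter per image. A sketch of that semantics as read from Figures 13B-13G; the function and its parameter names are illustrative, not from the patent:

```python
def matches(labels, required=(), excluded=(), only=(), optional=()):
    """Apply the four search implications to one image's object labels.

    labels: set of objects detected in the image.
    required = double-click or handwriting; optional = click;
    excluded = stroke; only = circle selection.
    """
    labels = set(labels)
    if only:
        return labels == set(only)       # "only": nothing but these objects
    if not set(required) <= labels:
        return False                     # every "required" object present
    if labels & set(excluded):
        return False                     # no "excluded" object present
    if optional and not labels & set(optional) and not required:
        return False                     # with no "required", need an "optional" hit
    return True
```

Note that "required" and "optional" do not exclude other content (the candidate images may also contain people), while "only" does, matching the Figure 13D and 13E discussions below.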
Step 3: The equipment searches for images corresponding to the user's selection operations
After the equipment detects the user's operations, it produces search criteria from them, searches for associated images on the equipment or in the cloud using the search criteria, and shows the thumbnails of these images to the user on the screen; by clicking a thumbnail icon, the user can switch to the corresponding image for viewing. Alternatively, the full found images can be shown to the user on the screen. When showing search results, the equipment can sort them based on their similarity to the user-interested region used in the search: images with high similarity are preferably displayed first, and images with low similarity later.
For example, the equipment detects that the user chooses a car in an image as the search term. In the search results fed back by the equipment, images of cars are displayed first, and images containing buses are displayed after the images of cars. As another example, the equipment detects that the user chooses a person in an image as the search term. In the search results, images of the person with the same person ID as the one selected are displayed first, then images of people similar in appearance or clothing to the selected person, and finally images containing other people.
As shown in Figure 13A, the equipment detects that there is an automobile in the image, highlights the automobile region, and prompts the user that the region is selectable. As shown in Figure 13B, when the equipment detects that the user has double-clicked both the aircraft and the automobile in the same image, the aircraft and the automobile are both "required", so the equipment determines that the user wishes to browse images containing both an aircraft and an automobile; therefore, the candidate images shown by the equipment all contain an aircraft and an automobile, as shown in Figure 13C. With this embodiment, when the user wishes to search for images containing both an aircraft and an automobile, the user only needs to find one image containing an aircraft and an automobile and can quickly search through that image, obtaining all images containing both, thus improving the browsing and retrieval speed of images.
The equipment detects that there is an automobile in the image, highlights the automobile region, and prompts the user that the region is selectable. As shown in Figure 13D, when the equipment detects that the user has double-clicked the automobile and handwritten "aircraft", the aircraft and the automobile are both "required", so the equipment determines that the user wishes to browse images containing both an aircraft and an automobile; therefore, the candidate images shown by the equipment likewise all contain an aircraft and an automobile. That is, double-clicking is consistent in effect with handwriting: both mean "required", and this kind of operation does not exclude other content; the returned images may also contain people, for example.
When the user wishes to search for images containing both an aircraft and an automobile, the image collection may be large, and the user may be unable to find a single image that contains both. With this embodiment, the user only needs to find one image containing an automobile and can quickly search through that image together with the handwritten content, obtaining all images containing both an aircraft and an automobile, thus improving the browsing and retrieval speed of images.
As shown in FIG. 13E, after the equipment detects that the aircraft has been circled, the aircraft is marked as "only selected"; this kind of operation excludes other content. The equipment therefore determines that the user wishes to browse images containing only an aircraft, and the candidate images it displays contain only aircraft. With this embodiment, when the user wishes to browse images containing only an aircraft, a fast search can be performed from any image containing an aircraft, thereby improving the speed of browsing and retrieval.
As shown in FIG. 13F, after the equipment detects that the user has single-clicked the aircraft and the automobile, the aircraft and the automobile are "optional", and the equipment determines that the user wishes to browse images containing an aircraft or an automobile. Therefore, the candidate images displayed by the equipment may contain an aircraft or an automobile, either together or separately; this kind of operation does not exclude other content. With this embodiment, when the user wishes to browse images containing an aircraft or an automobile, a fast search can be performed from any image containing either, thereby improving the speed of browsing and retrieval.
As shown in FIG. 13G, when the equipment detects that the user has crossed out the person, the person is marked as "excluded", and none of the candidate images displayed by the equipment contains a person. These operations can be combined. For example, if the equipment detects that the user has single-clicked the aircraft, double-clicked the automobile, and crossed out the person, then the aircraft is "optional", the automobile is "essential", and the person is "excluded"; the candidate images displayed by the equipment may contain an aircraft, certainly contain an automobile, and never contain a person. With this embodiment, when the user wishes to browse images containing a certain object, a fast search can be performed from any image containing that object, thereby improving the speed of browsing and retrieval.
In some cases, the operation the user intended and the operation the equipment recognizes may not match. For example, the user double-clicks the screen but the equipment recognizes a single click. To avoid such inconsistency, after recognizing the user's operation the equipment can display different operations in different ways.
As shown in FIG. 13A to FIG. 13G, after recognizing the user's double-click on the aircraft in the image, the equipment displays the aircraft above the screen and can use a predetermined color to indicate that the aircraft is essential, for example red. After recognizing the user's single click on the automobile in the image, the equipment displays the automobile above the screen and can use a predetermined color to indicate that the automobile is optional, for example green. With this embodiment, the user can determine from the colors whether the content recognized by the equipment is accurate and, if there is an error, correct it in time, improving the efficiency of browsing and searching.
Embodiment 2: Fast browsing based on multiple images
The user may want to search for images containing both a person and a dog. However, when the album is large, it is not easy for the user to find a single image that contains both. Therefore, an embodiment of the present invention further provides a method for fast browsing by selecting objects in different images.
FIG. 14A to FIG. 14C are schematic diagrams of fast browsing based on multiple images according to an embodiment of the present invention.
Step 1: The equipment detects the user's operation on the first image
As described in Embodiment 1, the equipment detects the user's operation on the first image: it detects that the user selects one or more regions in the first image, determines the search condition from the user's operation, and displays thumbnails of the retrieved images on the device screen.
As shown in FIG. 14A, the user wants the first image to specify that the retrieved images must contain a person, so the user double-clicks the region of the person in the first image. When the equipment detects that the user has double-clicked the region of the person in the first image, it determines that the images returned to the user must contain a person.
Step 2: The equipment searches for images corresponding to the user's selection operation
After detecting the user's operation on the first image, the equipment generates a search criterion from the operation, uses it to search for matching images on the device or in the cloud, and displays thumbnails of these images to the user on the screen.
As shown in FIG. 14A, when the equipment detects that the user has double-clicked the region of the person in the first image, it determines that the images returned to the user must contain a person.
This step 2 is optional; the flow may also jump directly from step 1 to step 3.
Step 3: The equipment detects the user's operation for activating selection of a second image
The equipment detects the user's operation for activating selection of a second image and opens the album's thumbnail mode so that the user can choose the second image. The activating operation may be a gesture, a stylus operation, a voice command, and so on.
For example, the user presses the button on the stylus; the equipment detects that the stylus button is pressed and pops up a menu, one item of which is "choose another image". The equipment detects that the user clicks this item, or directly opens the album thumbnail mode, so that the user can choose the second image.
As shown in FIG. 14A, the equipment detects that the stylus button is pressed, pops up the menu for choosing another image, detects that the user clicks the "choose another image" item, and opens the album thumbnail mode so that the user can choose the second image.
As another example, the user long-presses the image and the equipment detects the long-press operation. The equipment pops up a menu, one item of which is "choose another image"; the equipment detects that the user clicks this item, or directly opens the album thumbnail mode, so that the user can choose the second image.
As another example, the equipment displays a "choose a second image" button in image browsing mode and detects that the user presses it. When the button press is detected, the image thumbnail mode is opened so that the user can choose the second image.
As another example, the user speaks a voice command such as "open album"; when the equipment detects this voice command, it opens the album thumbnail mode so that the user can choose the second image.
Step 4: The equipment detects the user's operation on the second image
The user chooses the image to operate on; the equipment detects that the user clicks the desired image and displays it on the screen.
The user then operates on the second image, and the equipment detects the operation. As described in Embodiment 1, the equipment detects that the user selects one or more regions in the second image, determines the search condition from the user's operation, and displays thumbnails of the retrieved images on the device screen.
As shown in FIG. 14B, the user clicks an image containing a dog; the equipment detects the click and displays that image on the screen. The user wants the second image to specify that the retrieved images must contain a dog, so the user double-clicks the region of the dog in the second image. When the equipment detects that the user has double-clicked the region of the dog in the second image, it determines that the images returned to the user must contain both a person and a dog.
Step 5: The equipment searches for images corresponding to the user's selection operations
After detecting the user's operations on the first and second images, the equipment generates a search criterion from the combination of the two operations, uses it to search for matching images on the device or in the cloud, and displays thumbnails of these images to the user on the screen.
As shown in FIG. 14C, the equipment detects that the user has double-clicked the person in the first image and the dog in the second image; it therefore determines that the images returned to the user must contain both a person and a dog, and displays thumbnails of these images on the screen.
With this embodiment, the user can quickly find the desired images based on user-interested regions in multiple images, thereby improving the speed of image lookup.
Embodiment 3: Video browsing based on image regions
Step 1: The equipment detects the user's operation on an image
The manner in which the equipment detects the user's operation on an image is as described in Embodiments 1 and 2 and is not repeated here.
The equipment detects that the user selects one or more user-interested regions in the image, determines the search condition from the user's operations on those regions, and displays thumbnails of the matching video frames on the device screen.
FIG. 15A to FIG. 15C are schematic diagrams of fast browsing in a video according to an embodiment of the present invention.
As shown in FIG. 15A to FIG. 15C, the user wants to specify that the search results must contain an automobile, so the user double-clicks the region of the automobile in the image. When the equipment detects that the user has double-clicked the region of the automobile, it determines that the video frames returned to the user must contain an automobile.
Besides operations on selectable regions in an image, the equipment can also support operations on video frames. After the equipment detects that a playing video has been paused, it opens a mode for searching by user-interested region, so that the user can operate on each user-interested region in the paused frame; after the equipment detects the user's operation on a user-interested region in the video frame, it determines the search condition.
For example, while the device plays a video, the equipment detects that the user clicks the pause button and then double-clicks the automobile in the video frame; the equipment determines that the images or video frames returned to the user must contain an automobile.
Step 2: The equipment searches for video frames corresponding to the user's selection operation
After detecting the user's operation on an image or video frame, the equipment generates a search criterion from the operation and uses it to search for matching images or video frames on the device or in the cloud.
Image search is similar to Embodiments 1 and 2 and is not repeated here. The following describes how to search for matching video frames within a video.
For each video, shot segmentation is performed first. One shot-segmentation method detects I-frames during video decoding and treats each I-frame as the start of a shot. Alternatively, the video can be segmented into shots containing different scenes according to the visual difference between frames, for example using frame differences, color histogram distances, or differences in more complex visual features (hand-designed or learned).
For each shot, object detection is performed starting from the first frame to judge whether a video frame satisfies the search criterion; if it does, the thumbnail of the first satisfying frame is displayed on the screen.
As shown in FIG. 15A, the equipment detects that the user has double-clicked the region of the automobile. The equipment segments the video into several shots and detects whether an automobile appears in the frames of each shot; if so, the first frame containing an automobile is returned. If multiple shots contain frames with an automobile, the thumbnails of the first automobile-containing frame of each such shot are displayed together.
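The segment-then-scan flow above can be sketched as follows. This is a minimal illustration, not the patented implementation: the per-frame "histograms" and label lists are toy stand-ins for real decoded frames and detector output, and the scene-change threshold is an arbitrary assumption.

```python
# Hedged sketch: split a video into shots by color-histogram difference,
# then return the first frame of each shot that contains the target object.

def split_shots(frame_hists, threshold=0.5):
    """Start a new shot whenever consecutive histograms differ too much."""
    shots, start = [], 0
    for i in range(1, len(frame_hists)):
        diff = sum(abs(a - b) for a, b in zip(frame_hists[i - 1], frame_hists[i]))
        if diff > threshold:
            shots.append((start, i - 1))
            start = i
    shots.append((start, len(frame_hists) - 1))
    return shots

def first_match(shot, frame_labels, target):
    """Index of the first frame in the shot whose labels contain target."""
    lo, hi = shot
    for i in range(lo, hi + 1):
        if target in frame_labels[i]:
            return i
    return None

hists = [[1, 0], [1, 0], [0, 1], [0, 1], [0, 1]]   # toy 2-bin histograms
labels = [[], ["automobile"], ["automobile"], [], ["automobile"]]
shots = split_shots(hists)
print(shots)                                             # [(0, 1), (2, 4)]
print([first_match(s, labels, "automobile") for s in shots])  # [1, 2]
```

A real system would decode the video (or use I-frame boundaries, as the text notes) and run an object detector per frame; only the control flow is shown here.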
As shown in FIG. 15B, an icon on the thumbnail prompts the user that the thumbnail represents a video segment.
Step 3: Play the video shot that satisfies the search criterion
If the user wishes to watch a video segment that satisfies the search criterion, the user clicks the thumbnail bearing the video icon. When the equipment detects the click, it jumps to the video player and starts playback from the first frame satisfying the user's search condition, stopping when a frame no longer satisfies it; the user can then choose to let the video continue playing, or return to the album to browse other video segments or images.
As shown in FIG. 15C, the user clicks the thumbnail of a video frame containing an automobile; after the equipment detects the click, it starts playing the video from that frame.
When the user wishes to locate a certain frame in a video and knows what the frame contains, this embodiment enables a quick search.
Embodiment 4: Fast browsing in camera preview mode
Step 1: The equipment detects the user's operation in camera preview mode
The user opens the camera, enters camera preview mode, and turns on the image search function. The equipment detects that the camera is open and the search function is on, starts capturing the images input through the camera, and detects user-interested regions in one or more input images. The equipment also detects the user's operations on these user-interested regions; the form and effect of these operations are similar to Embodiments 1, 2, and 3.
The equipment detects that the user selects one or more user-interested regions in the image and determines the search condition from the user's operations on those regions.
FIG. 16 is a schematic diagram of fast browsing in the camera preview interface according to an embodiment of the present invention.
As shown in FIG. 16, the user double-clicks a first person in preview mode in a first scene. The equipment detects the double-click and determines that the searched images must contain the first person. Similarly, the user double-clicks a second person in a second scene; the equipment detects this and determines that the searched images must contain both the first and second persons. The user then double-clicks a third person in a third scene; the equipment detects this and determines that the searched images must contain the first, second, and third persons. The equipment can display on the screen thumbnails of the found images that satisfy the search condition.
The search function in camera preview mode can be turned on in various ways.
For example, in camera preview mode a button is provided on the user interface, and the equipment turns on the search function when it detects that the user clicks this button. The equipment determines the search condition after detecting the user's operation on a selectable region in the image.
As another example, in camera preview mode a menu key is provided on the user interface, with the button for turning on the image search function placed in the menu; the equipment turns on the search function when it detects the user's click. The equipment determines the search condition after detecting the user's operation on a selectable region in the image.
As another example, in camera preview mode the equipment detects that the user presses the button on the stylus and pops up a menu containing a button for turning on the search function; the equipment turns on the search function when it detects the user's click. The equipment determines the search condition after detecting the user's operation on a selectable region in the image.
As another example, the search function is on by default, and the equipment determines the search condition directly after detecting the user's operation on a selectable region in the image.
Step 2: The equipment searches for images or video frames corresponding to the user's selection operation
After detecting the user's operation in camera preview mode, the equipment generates the corresponding search criterion and searches on the device or in the cloud for images or video frames matching it. The search criterion is the same as in Embodiment 1 and is not repeated here.
In this embodiment, the user can select search terms through the preview mode and thereby quickly find the corresponding images or video frames.
Embodiment 5: Personalized album tree structure
Step 1: The equipment aggregates and separates the user's images
The equipment aggregates and separates the user's images according to the semantics of classification labels and visual similarity: images that are semantically or visually similar are aggregated, while images that differ greatly in semantics or appearance are separated. Images with semantic labels are aggregated and separated by semantic concept; for example, landscape images are aggregated together, and landscape images are separated from vehicle images. Images without semantic labels are aggregated and separated by visual information; for example, images whose dominant hue is red are aggregated, and images with a red dominant color are separated from images with a blue dominant color.
Aggregation and separation of images can be performed in the following ways:
Mode (1): analysis of the whole image. For example, the entire image is classified, or the color distribution of the entire image is computed. Images of the same class are aggregated, and images of different classes are separated. This method suits images that do not contain any particular object.
Mode (2): analysis of user-interested regions in the image. User-interested regions with class labels can be aggregated and separated by the semantics of the labels: regions with the same label are aggregated, and regions with different labels are separated. User-interested regions without class labels are aggregated and separated mainly by visual information; for example, a color histogram is extracted over each user-interested region, regions whose histograms are close are aggregated, and regions whose histograms are far apart are separated. This mode suits images containing particular objects, and with it a single image can be aggregated into multiple classes.
Mode (1) and mode (2) can be combined. For example, among landscape images, ocean images dominated by blue are aggregated into one class and ocean images dominated by green into a second class. As another example, among automobile images, automobiles of different colors are aggregated into multiple classes.
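The grouping rule of step 1 can be sketched as follows: images with a semantic label group by that label, while unlabeled images group by visual information such as dominant color. The metadata here is a toy stand-in for real classifier and color-analysis output.

```python
# Hedged sketch of aggregation/separation: semantic label first, falling
# back to dominant color for unlabeled images.

from collections import defaultdict

def aggregate(images):
    groups = defaultdict(list)
    for name, meta in images.items():
        # Use the semantic label when available, else a visual key.
        key = meta.get("label") or ("color:" + meta["dominant_color"])
        groups[key].append(name)
    return dict(groups)

images = {
    "a.jpg": {"label": "landscape"},
    "b.jpg": {"label": "landscape"},
    "c.jpg": {"label": "automobile"},
    "d.jpg": {"label": None, "dominant_color": "red"},
    "e.jpg": {"label": None, "dominant_color": "red"},
}
print(aggregate(images))
# {'landscape': ['a.jpg', 'b.jpg'], 'automobile': ['c.jpg'],
#  'color:red': ['d.jpg', 'e.jpg']}
```

Combining modes (1) and (2) would simply use a compound key, e.g. label plus dominant color.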
FIG. 17 is a first exemplary structure diagram of a personalized tree structure according to an embodiment of the present invention. As shown in FIG. 17, cars are aggregated together and buses are aggregated together.
Step 2: The equipment builds a tree structure over the aggregated and separated images
For user-interested regions or images with class labels, the tree structure is built from the semantic information of the labels. The tree structure can be a predefined structure; for example, "vehicles" contains automobile, bicycle, motorcycle, aircraft, and ship, and "automobile" can be further subdivided into car, bus, truck, and so on.
For user-interested regions or images without class labels, the average visual information of each aggregated cluster is computed first; for example, a color histogram is computed for every image in the cluster and the histograms are averaged to serve as the cluster's visual tag. Visual tags are computed for all unlabeled clusters, the distances between the visual tags are then computed, and clusters with close visual tags are abstracted into a higher-level visual tag. For example, suppose that after aggregation and separation, the images with blue subjects form a first cluster, those with yellow subjects a second, and those with red subjects a third. The distances between the visual tags of these three clusters are computed, and because the yellow tag contains blue information, the yellow and blue visual tags are abstracted into one class.
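The visual-tag step can be sketched numerically: each unlabeled cluster's tag is the mean of its members' color histograms, and tags closer than some distance threshold are grouped under a shared parent node. The 2-bin histograms and the 0.5 threshold below are toy assumptions, not values from the text.

```python
# Hedged sketch of visual tags: mean histogram per cluster, then merge
# clusters whose tags are within an L1-distance threshold.

def visual_tag(histograms):
    """Average the histograms column-wise to get the cluster's tag."""
    n = len(histograms)
    return [sum(col) / n for col in zip(*histograms)]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

blue_cluster = [[0.1, 0.9], [0.2, 0.8]]
yellow_cluster = [[0.3, 0.7]]
red_cluster = [[0.9, 0.1]]

tags = {name: visual_tag(c) for name, c in
        [("blue", blue_cluster), ("yellow", yellow_cluster), ("red", red_cluster)]}

# Tags within the threshold are abstracted into one higher-level node.
print(l1(tags["blue"], tags["yellow"]) < 0.5)  # True  -> share a parent
print(l1(tags["blue"], tags["red"]) < 0.5)     # False -> stay separate
```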
Step 3: The equipment revises the tree structure
The number of images at each level is counted first; when the number of images exceeds a predetermined threshold, the labels of the next level down are opened up.
For example, suppose the image-count threshold for a default level is 20. If there are 50 images under the landscape label, labels such as beach, mountain, and desert are further opened.
The equipment can also force a certain class to be displayed according to the user's manual setting. For example, suppose the image-count threshold for a default level is 20 and there are 15 images under the landscape label; if the equipment detects that the user has manually set beach images to be displayed separately, the beach label is exposed while the other landscape labels are exposed together as one class.
Because the image distribution on each user's equipment differs, the tree structure the equipment exposes to each user also differs.
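The revision rule of step 3 can be sketched as a function of per-child image counts: the next level opens when the parent's total exceeds the threshold (20 in the example above), and manually pinned labels are always shown. The label names below are illustrative.

```python
# Hedged sketch of threshold-based level expansion with user-pinned labels.

def visible_children(child_counts, threshold=20, pinned=()):
    """Return which child labels to expose under a parent node."""
    shown = set(pinned)                    # user-pinned labels always appear
    if sum(child_counts.values()) > threshold:
        shown |= set(child_counts)         # open the whole next level
    return sorted(shown)

landscape = {"beach": 25, "mountain": 15, "desert": 10}   # 50 images total
print(visible_children(landscape))                # ['beach', 'desert', 'mountain']

small = {"beach": 5, "mountain": 6, "desert": 4}  # 15 images total
print(visible_children(small, pinned=["beach"]))  # ['beach']
```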
FIG. 18 is a second exemplary structure diagram of a personalized tree structure according to an embodiment of the present invention.
In FIG. 17, the vehicles label of user 1 is subdivided into four classes: bicycle, motor vehicle, aircraft, and ship. Motor vehicle is further divided into car, bus, and tram, and car and bus can be further subdivided by color.
In FIG. 18, however, the vehicles label of user 2 contains only cars of different colors.
Embodiment 6: Definition and classification of personalized image classes
Embodiment 6 can define personalized classes of images in the album according to the user's operations and classify images into those personalized classes.
Step 1: The equipment judges whether an image's label should be modified.
The equipment judges whether the user manually makes a modification on the image's attribute management page, and if so creates a new class for classifying images. For example, while browsing, the user changes the label of a painted image from "painting" to "my pictures". The equipment detects the user's modification of the image attribute and determines that the image's label should be modified.
The equipment judges whether the user performed a special operation while organizing images, and if so creates a new class for classifying images. For example, while organizing images the user creates a new folder, names it "my paintings", and moves a group of images into it. The equipment detects that a new folder has been created and that images have been moved into it, and determines that the labels of this group of images should be modified.
The equipment judges whether the user shared images while using social software: photos related to family may be shared in a family group, photos related to pets in a pet exchange group, and photos related to books in a reading group. By analyzing such user operations, the equipment associates the images in the user's album with the social groups and determines that the images' labels should be modified.
Step 2: Generation of personalized classes.
When the equipment judges that an image's label is to be modified, a new class definition is generated. The class is assigned a unique identifier, and images of the same class share the same identifier. For example, the painted images in step 1 are assigned a unique identifier named "my pictures", and the images shared in the family group are assigned a unique identifier named "family group". Similarly, images shared in other groups are each assigned a unique identifier, whose name may be "pet" or "reading".
Step 3: Judging the degree of change of a personalized class.
The equipment analyzes the name of the personalized class and judges its degree of change, so as to determine how to realize classification into the personalized class.
For example, for a personalized class named "white pet", the equipment analyzes that the class is composed of two elements: a color attribute, "white", and an object class, "pet". If the equipment's default subclasses include a "white" class and a "pet" class, the equipment associates the two subclasses, and every image classified as both "white" and "pet" is re-classified into "white pet". Classification into the personalized class is thereby realized.
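When both constituent subclasses already exist, the compound class reduces to a set intersection over class memberships. A minimal sketch with illustrative image names:

```python
# Hedged sketch of the compound-class rule: "white pet" is realized as the
# intersection of the existing "white" and "pet" membership sets.

white = {"img1", "img2", "img4"}
pet = {"img2", "img3", "img4"}

white_pet = sorted(white & pet)
print(white_pet)  # ['img2', 'img4']
```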
If the equipment's default subclasses do not include a "white" class and a "pet" class, the model needs to be retrained. For example, the equipment uploads the "white pet" images organized by the user to the cloud; the cloud server adds a new class on top of the original model and trains it with the uploaded images. After training, the updated model is sent back to the user's equipment. When a new image appears in the user's album, it is classified with the updated model, and when its confidence for the "white pet" class exceeds a threshold, the image is placed into the "white pet" class.
Step 4: Judging the consistency of the device-side and cloud-side classification of an image.
When the cloud and the equipment classify the same image differently, the result needs to be reconciled. For example, for an image of a dog, the classification result on the equipment is "cat" while the result in the cloud is "dog".
When the equipment detects no user feedback: suppose the threshold is set to 0.9. If the cloud's classification confidence is above 0.9 and the equipment's is below 0.9, the image is deemed to be labeled "dog". Conversely, if the cloud's confidence is below 0.9 and the equipment's is above 0.9, the image is labeled "cat". If both confidences are below 0.9, the image's class is lifted one level and labeled "pet".
When the equipment detects explicit user feedback: the misclassification results, including the misclassified photos and the correct classes specified by the user, are uploaded to the cloud to guide model training. After training, the new model is deployed to the equipment.
Embodiment 7: Fast browsing on the device
Embodiment 7 can perform fast browsing based on the tree structure of Embodiment 5.
Step 1: The equipment displays the label classes of a certain level
When the user browses a certain level, the equipment detects this and displays to the user all label classes contained in that level, either as text or as image thumbnails. When displaying image thumbnails, the equipment can show a preset icon for the class, show an image actually present in the album, show the thumbnail of the most recently modified image, or show the thumbnail of the image with the highest confidence in the class, and so on.
Step 2: The equipment detects the user's operation and responds
The user can operate on each label class to enter the next level.
FIG. 19 is a schematic diagram of fast browsing of the tree structure on a mobile terminal according to an embodiment of the present invention.
As shown in FIG. 19, when the user clicks a label, the equipment detects the click and displays the next level of that label. For example, when the user clicks the landscape label, the equipment detects the click, and the labels under landscape (ocean, mountain, inland waters, and desert) are displayed to the user. When the user further clicks inland waters, the equipment detects the click and displays the waterfall, river, and lake labels under it.
The user can also operate on a label class to view all images contained in it.
As shown in FIG. 19, when the user long-presses a label, the equipment detects the long press and displays all images under that label. When the user long-presses the landscape label, the equipment detects this and displays to the user all images labeled landscape, including images of the ocean, mountains, inland waters, and deserts. When the user long-presses the inland waters label, the equipment detects this and displays all images labeled inland waters, including images of waterfalls, lakes, and rivers. When the user long-presses the waterfall label, the equipment detects this and displays all images labeled waterfall.
The user can also operate by voice. For example, when the user speaks "enter inland waters", the equipment detects this voice input, determines through natural-language processing that the user's operation is "enter" and its object is "inland waters", and displays the waterfall, river, and lake labels under the inland waters label to the user. If the user speaks "view inland waters", the equipment detects this voice input, determines through natural-language processing that the operation is "view" and its object is "inland waters", and displays to the user all images labeled inland waters, including images of waterfalls, lakes, and rivers.
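The "enter"-versus-"view" dispatch above can be sketched as follows, once speech recognition has produced a text utterance. The phrase table stands in for real natural-language processing, and the tree contents are illustrative.

```python
# Hedged sketch of voice dispatch: "enter <label>" opens the next level of
# labels, "view <label>" gathers every image under that label's subtree.

TREE = {"inland waters": ["waterfall", "river", "lake"]}
IMAGES = {"waterfall": ["w1.jpg"], "river": ["r1.jpg"], "lake": ["l1.jpg"]}

def handle_voice(utterance):
    action, _, target = utterance.partition(" ")
    if action == "enter":
        return TREE.get(target, [])                    # show child labels
    if action == "view":
        return [img for child in TREE.get(target, [])
                for img in IMAGES.get(child, [])]      # show all images
    return []

print(handle_voice("enter inland waters"))  # ['waterfall', 'river', 'lake']
print(handle_voice("view inland waters"))   # ['w1.jpg', 'r1.jpg', 'l1.jpg']
```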
In this embodiment, images are classified in the visual form of thumbnails, so the user can quickly find images by class, improving the speed of browsing and retrieval.
Embodiment 8: Fast browsing on a smaller screen
Some electronic devices have very small screens; the present Embodiment 8 provides the following scheme for them.
Embodiment 8 may be based on the tree structure of Embodiment 5.
Step 1: The device displays the label categories of a given level
When the user browses a given level, the device detects this and displays some of the label categories in that level. They may be displayed as text or as image thumbnails. When thumbnails are displayed, the device may show a preset icon for the category, show an image that actually exists in the album, show the thumbnail of the most recently modified image, or show the thumbnail of the image with the highest confidence in that category.
Figures 21A-21B are schematic diagrams of fast browsing of the tree structure on a small-screen device according to an embodiment of the present invention.
As shown in Figure 21A, when the user browses the level consisting of vehicles, pets, and landscape, the device detects that this level is being accessed and shows only one category thumbnail on the screen at a time, e.g. vehicles, pets, or landscape.
Step 2: The device detects the user's operations and gives feedback
The user can operate on the label categories so as to switch between them. As shown in Figure 21A, the device initially shows the vehicles category label. When the user swipes on the device screen, the device detects the swipe and switches from the vehicles category label to the pets category label; when the device detects another swipe, it switches from the pets category label to the landscape category label.
It should be noted that the operation corresponding to switching labels may also take other forms; the above is merely illustrative.
The user can operate on a label category to view all images contained in it. Specifically, part of the images is displayed at a time, and the user displays the remaining images by operating.
As shown in Figure 21A, when the user taps a label, the device detects the tap and shows one of the images under that label. For example, when the user taps the landscape label, the device detects the tap and displays an image of a beach scene under the landscape label; when the device detects a swipe, it displays another image under the landscape label.
It should be noted that the operation corresponding to switching images may also take other forms; the above is merely illustrative.
The user can also operate between levels so as to switch between them. When the device detects a first operation of the user, it enters the level below; when the device detects a second operation, it returns to the level above.
As shown in Figure 21B, the device is at the level containing landscape and vehicles. While the device shows the vehicles label, the user rotates the dial clockwise; the device detects the clockwise rotation and descends from the landscape/vehicles level into the level below vehicles, which contains labels such as bicycle and aircraft. The user can switch label categories by swiping, e.g. from bicycle to aircraft. When the user rotates the dial counterclockwise, the device detects the counterclockwise rotation and switches from the bicycle/aircraft level back to the level above, which contains label categories such as landscape and vehicles. It should be noted that the operations corresponding to switching levels may also take other forms; the above is merely illustrative.
Similarly, the user can operate by voice. For example, when the user says "enter inland water scenery", the device detects the speech input, determines through natural speech processing that the operation is "enter" and its object is "inland water scenery", and displays the waterfall, river, and lake labels under the inland water scenery label. If the user says "view inland water scenery", the device detects the speech input, determines that the operation is "view" and its object is "inland water scenery", and displays every image labeled inland water scenery, including images of waterfalls, lakes, and rivers. For another example, when the user says "return to the level above", the device detects this input and switches to the level above.
It should be noted that the speech input may also take other forms; the above is merely illustrative.
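The navigation in Steps 1 and 2 can be sketched as a small state machine over the label tree. The tree contents, method names, and the swipe/dial mapping below are illustrative assumptions; the embodiment explicitly allows other operations for the same actions.

```python
# Minimal sketch of small-screen tree navigation: a swipe cycles labels
# within the current level, a clockwise dial turn enters the level below
# the shown label, a counterclockwise turn returns to the level above.

TREE = {
    "root": ["vehicles", "pets", "landscape"],
    "vehicles": ["bicycle", "aircraft"],
}

class Browser:
    def __init__(self):
        self.stack = ["root"]   # path of parent nodes down the tree
        self.index = 0          # which label of the level is on screen

    def current(self):
        return TREE[self.stack[-1]][self.index]

    def swipe(self):
        """Show the next label in the same level, wrapping around."""
        labels = TREE[self.stack[-1]]
        self.index = (self.index + 1) % len(labels)

    def dial_cw(self):
        """Enter the level below the currently shown label, if any."""
        if self.current() in TREE:
            self.stack.append(self.current())
            self.index = 0

    def dial_ccw(self):
        """Return to the level above, if not already at the top."""
        if len(self.stack) > 1:
            self.stack.pop()
            self.index = 0
```

For instance, starting at "vehicles", a clockwise dial turn shows "bicycle", a swipe shows "aircraft", and a counterclockwise turn returns to "vehicles".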
Embodiment 9: Displaying images on a smaller screen
Some electronic devices have small screens, and users may use them to view photographs stored on other devices or in the cloud. To enable fast browsing on such devices, the present embodiment provides the following scheme.
Step 1: The device determines the number of user-interested regions in the image to be displayed
According to the region list of the image, the device checks how many user-interested regions the image contains.
Step 2: The device chooses a display mode based on the number of user-interested regions
The device detects the number of user-interested regions contained in the image and selects a different display mode depending on that number.
Figure 22 is a schematic diagram of image display on a small-screen device according to an embodiment of the present invention. As shown in Figure 22:
When the device detects that a landscape image contains no user-interested region, the thumbnail of the whole image is shown on the screen. Depending on the shape of the device screen, part of the original image may be cropped; for example, on a circular screen, the largest inscribed circle is cropped from the center of the image.
When the device detects that an image contains user-interested regions, one of them is chosen and shown centered on the device screen. The selection criterion may be the user's viewpoint heat map, in which case the user-interested region with the highest user attention is shown preferentially; it may also be the classification confidence of the regions, in which case the user-interested region with the highest classification confidence is shown preferentially.
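Steps 1 and 2 above amount to a small decision rule. The sketch below uses the confidence criterion; the tuple layout and return values are illustrative assumptions, not the patent's data format.

```python
# Sketch of the display-mode decision: with no user-interested region
# the whole thumbnail is shown (cropped to the screen shape if needed);
# with one or more regions, the region with the highest classification
# confidence is centered on the screen.

def choose_display(regions):
    """regions: list of (label, confidence, bbox) tuples for one image."""
    if not regions:
        return ("full_thumbnail", None)
    best = max(regions, key=lambda r: r[1])   # highest confidence wins
    return ("center_region", best[0])
```

Under the viewpoint-heat-map criterion mentioned in the text, the same function would simply rank by attention score instead of confidence.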
Step 3: The device detects different operations of the user and gives feedback
The user performs different operations on the device; the device detects them and gives different feedback accordingly. These operations should let the user zoom in and out of the image and, when the image contains multiple user-interested regions, switch between them by some operation.
For example, when the user pinches two fingers together on the screen, the device detects the pinch and shrinks the displayed image, until the long side of the image equals the short side of the screen.
For another example, when the user spreads two fingers apart on the screen, the device detects the spread and enlarges the displayed image, up to a specific multiple of the original size; this multiple may be preset.
For another example, as shown in Figure 22, when the user rotates the dial and the device detects the rotation, a different user-interested region is centered on the screen. When the user rotates the dial clockwise, the device detects the clockwise rotation and centers the next user-interested region; when the user rotates the dial counterclockwise, the device detects the counterclockwise rotation and centers the previous user-interested region.
Through the present embodiment, the user can comfortably view pictures on a device with a small screen.
Embodiment 10: Image transmission based on user-interested regions (part one)
At present, people increasingly store images in the cloud. The present embodiment provides a scheme for viewing cloud images on a device.
Step 1: The device determines the transmission mode according to a criterion
The device selects a transmission mode by judging its environment or conditions, which here may include the number of images the device requests from the cloud or from another device.
There are two main transmission modes: full transmission and adaptive transmission. In full transmission mode the data is transferred to the device in its entirety, without compression; adaptive transmission uses compressed data and multiple passes to save bandwidth and power.
Figure 23 is a schematic diagram of the transmission modes for different transmission volumes according to an embodiment of the present invention.
As shown in Figure 23, a threshold N is set before transmitting images. N may be a preset value, e.g. N = 10. N may also be computed from the image sizes and the number of requested images, as the largest value satisfying the condition that transmitting N images in full at once consumes less traffic than transmitting N images adaptively.
When the device detects that the user requests fewer than N images, the images are transmitted in full transmission mode. When the device detects that the user requests more than N images, they are transmitted in adaptive transmission mode.
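The mode decision can be sketched in a few lines. The function name and the default threshold are illustrative; the text leaves the exact N, and the behavior at exactly N images, open.

```python
# Sketch of the threshold rule: fewer than N requested images are sent
# in full, otherwise the adaptive (compressed, multi-pass) mode is used.
# The boundary case (exactly N) is resolved here toward adaptive mode,
# which the patent does not specify.

def pick_transmission_mode(requested_images, n=10):
    return "full" if requested_images < n else "adaptive"
```

With the preset N = 10 from the text, a request for 5 images goes out in full, while a request for 25 images is compressed first for preview.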
Step 2: Transmitting images in full transmission mode
When the device detects that the user requests fewer than N images, the images are transmitted in full transmission mode: the original images are transmitted to the requesting device over the network in their entirety, without any compression or processing.
Step 3: Transmitting images in adaptive transmission mode
In adaptive transmission mode, the N requested images are first compressed as whole images in the cloud or on the other device to reduce the amount of transmitted data, e.g. by shrinking the image size or choosing a compression algorithm with a higher compression ratio. The compressed N images are then transferred to the requesting device over the network connection for the user to preview.
When the user chooses to browse some or all of the N images further, e.g. the user device detects that an image A is opened full-screen, the user device requests a partially compressed version from the cloud or the other device. Upon receiving the request for a partially compressed A, the cloud or other device compresses the original of A on the principle that the parts of the image containing user-interested regions use a compression algorithm with a lower compression ratio while the background outside the user-interested regions uses a compression algorithm with a higher compression ratio. The cloud or other device then transmits the partially compressed image to the user device over the network.
As shown in Figure 23, the user-interested regions of the requested image are an aircraft and a car, so the regions containing the aircraft and the car are compressed with a lower compression ratio, letting the user view their details relatively sharply, while the region outside the aircraft and the car is compressed with a higher compression ratio so as to save traffic.
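The partial-compression principle can be sketched as a per-block quality map: blocks overlapping a user-interested region keep high quality (low compression ratio), background blocks get low quality. The 8-pixel block grid and the quality values 90/30 are assumptions for illustration; the patent names no specific codec.

```python
# Sketch of region-dependent compression: assign a JPEG-style quality
# to each block of the image depending on whether it overlaps any
# user-interested region (ROI).

def quality_map(width, height, rois, block=8, hi=90, lo=30):
    """Return per-block quality values; rois = [(x0, y0, x1, y1), ...]."""
    def overlaps(bx, by):
        x0, y0 = bx * block, by * block
        x1, y1 = x0 + block, y0 + block
        # standard axis-aligned rectangle overlap test against each ROI
        return any(not (x1 <= rx0 or rx1 <= x0 or y1 <= ry0 or ry1 <= y0)
                   for rx0, ry0, rx1, ry1 in rois)
    cols, rows = width // block, height // block
    return [[hi if overlaps(bx, by) else lo for bx in range(cols)]
            for by in range(rows)]
```

A real implementation would feed such a map into a codec that supports spatially varying quality (JPEG 2000's ROI coding is the classic example); the sketch only shows how the ROI list drives the compression-ratio choice.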
When the user operates on the image further, e.g. edits it, zooms in to browse it, or shares it, or when the user requests the original directly, the user device requests the uncompressed original from the cloud or the other device. Upon receiving the request, the cloud or other device sends the uncompressed original to the user device.
Through the present embodiment, the transmission volume of the device is kept within a certain range, reducing the amount of transmitted data. Moreover, when many images are transmitted, lowering the quality of the transmitted images lets the user quickly browse to the image needed.
Embodiment 11: Image transmission based on user-interested regions (part two)
At present, people increasingly store images in the cloud. The present embodiment provides a scheme for viewing cloud images on a device.
Step 1: The device determines the transmission mode according to a criterion
The device selects a transmission mode by judging its environment or conditions, which here may be the kind of network connection the device is on, such as a wireless broadband (Wi-Fi) network, a mobile operator's network, or a wired network; the quality of that network (e.g. a fast or a slow network); the image quality the user has set manually; and so on.
There are three main transmission modes: full transmission, partially compressed transmission, and fully compressed transmission. Full transmission transfers the data to the device in its entirety, without compression; partially compressed transmission transfers the data after partial compression; fully compressed transmission transfers the data after complete compression.
Figure 24 is a schematic diagram of the transmission modes under different network environments according to an embodiment of the present invention.
As shown in Figure 24, when the device is on a Wi-Fi or wired network, the cost of data transfer need not be considered, so when the device detects that the user requests images, they are transmitted in full transmission mode.
As shown in Figure 24, when the device is on an operator's network, the cost of data transfer must be considered. When the device detects that the user requests images, they may be transmitted to the device in full, partially compressed, or fully compressed mode; the choice may be made by a preset default transmission mode or by a mode selected by the user. Through the present embodiment, the user's transmission volume on an operator's network can be reduced.
The device may also select a transmission mode by judging the network quality: full transmission when the network quality is good, partially compressed transmission when it is average, and fully compressed transmission when it is poor. Through the present embodiment, the user can browse the needed images as quickly as possible.
Step 2: Transmitting images in full transmission mode
In full transmission mode, the cloud device applies no compression or processing to the images to be transmitted and transfers the originals to the user device over the network in their entirety.
Step 3: Transmitting images in partially compressed mode
In partially compressed mode, the user device requests partially compressed images from the cloud or another device. Upon receiving the request, the cloud or other device compresses each image on the principle that the parts of the image containing user-interested regions use a compression algorithm with a lower compression ratio while the background outside the user-interested regions uses a compression algorithm with a higher compression ratio. The cloud or other device then transmits the partially compressed image to the user device over the network.
As shown in Figure 24, the user-interested regions of the requested image are an aircraft and a car, so the regions containing the aircraft and the car are compressed with a lower compression ratio, letting the user view their details relatively sharply, while the region outside the aircraft and the car is compressed with a higher compression ratio so as to save traffic.
Step 4: Transmitting images in fully compressed mode
The requested images are first compressed as whole images in the cloud or on the other device to reduce the amount of transmitted data, e.g. by shrinking the image size or choosing a compression algorithm with a higher compression ratio. The compressed images are then transferred to the requesting device over the network connection for the user to preview.
Based on the transmission mode determined in Step 1, Step 2, Step 3, or Step 4 is performed selectively.
Embodiment 12: Quick sharing in thumbnail mode
Step 1: Determining the candidate images to share
The share candidate images may be determined automatically by the device or selected manually by the user.
When the device determines the share candidates automatically, it may determine a share candidate set by analyzing image content: the device detects the category label of each user-interested region in the images and forms a candidate set from the images containing the same category label, e.g. a candidate set of all images containing pets.
The device may also determine a share candidate set from the contacts appearing in the images: the device detects the identity of each user-interested region corresponding to a person and takes the images of the same contact, or of the same group of contacts, as one candidate set.
The device may also determine a time period and take the images whose shooting time falls within that period as share candidates; the period is set by the device by analyzing information such as shooting time and geographical position. The period may be preset, e.g. every 24 hours is one period, and the images shot within each 24 hours form one share candidate set.
The period may also be determined from changes in geographical position: the device detects that it is at a first geographical position at a first moment, at a second geographical position at a second moment, and at a third geographical position at a third moment, where the first and third positions may be the same position; the device then sets the period from the second moment to the third moment. For example, if the device detects that it was in Beijing on the 1st of a month, in Nanjing on the 2nd, and back in Beijing on the 3rd, it sets the period from the 2nd to the 3rd, and the images whose shooting time falls between the 2nd and the 3rd form one share candidate set. To judge whether its geographical position has changed, the device may measure the distance between positions: once the device has moved more than a certain distance, its position is judged to have changed; this distance may be preset, e.g. 20 kilometers.
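The position-based period rule can be sketched as trip detection over a sequence of timestamped positions. To keep the sketch self-contained, positions are reduced to one-dimensional kilometres and the function name is an assumption; a real implementation would use geodesic distance between coordinates.

```python
# Sketch of the period rule: the device is near a home position at t1,
# far away (beyond the preset distance, 20 km in the text) at t2, and
# back near home at t3, so [t2, t3] becomes one share period.

def detect_period(samples, threshold_km=20):
    """samples: time-ordered list of (time, position_km).
    Returns (start, end) of the away period, or None if never away."""
    home = samples[0][1]
    away = [t for t, pos in samples if abs(pos - home) > threshold_km]
    if not away:
        return None
    start = away[0]
    # the period ends when the device is next detected back near home
    back = [t for t, pos in samples
            if t > away[-1] and abs(pos - home) <= threshold_km]
    return (start, back[0]) if back else (start, samples[-1][0])
```

With the Beijing/Nanjing/Beijing example (days 1, 2, 3), this yields the period from day 2 to day 3.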
When the user determines the share candidates manually, the user selects the images to share by operating on their thumbnails, e.g. long-pressing an image; when the device detects the operation, it adds the operated image to the share candidate set.
Step 2: The device prompts the user to share in thumbnail mode
When the device detects that it is in thumbnail mode, it presents the share candidate sets to the user in some manner, e.g. by surrounding the thumbnails of the images in the same candidate set with frames of the same color. A share button is shown on these candidate sets; when the user taps it, the device detects the tap and enters sharing mode.
Step 3: Sharing the candidate share sets
A candidate share set may be shared with individual contacts: the device shares the images containing a given contact with that contact. The device first determines which contacts are contained in each image of the share candidate set and then sends each image to the contacts it contains.
Figure 25 is a first schematic diagram of sharing images from the thumbnail interface according to an embodiment of the present invention.
As shown in Figure 25, the device determines image 1 and image 2 to be one candidate share set and detects that image 1 contains contact 1 and contact 2, and image 2 contains contact 1 and contact 3.
When the user taps "share with each contact", the device sends image 1 and image 2 to contact 1, image 1 to contact 2, and image 2 to contact 3. This avoids the duplicates that arise when the user sends the same image to different users.
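The per-contact distribution can be sketched by inverting the image-to-contacts map; the names below are illustrative.

```python
# Sketch of "share with each contact": each contact receives exactly the
# images that contain them, so nobody receives the same image twice.

def per_contact_shares(image_contacts):
    """image_contacts: {image: set of contacts} -> {contact: list of images}."""
    out = {}
    for image, contacts in sorted(image_contacts.items()):
        for c in contacts:
            out.setdefault(c, []).append(image)
    return out
```

Applied to the Figure 25 example, contact 1 receives both images while contacts 2 and 3 each receive only the image they appear in.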
A candidate share set may also be shared with a contact group in bulk: the device shares the images containing the contacts with a group that contains those contacts. The device first determines the contacts contained in each image of the share candidate set, then checks the contact groups for one whose members exactly match the contacts contained in the share candidate set. If such a group exists, the device shares the images in the candidate set with that contact group automatically, or after the user manually adjusts the contacts. If the device cannot find a contact group that exactly matches the share candidate set, it creates a new contact group containing the contacts in the share candidate set and offers it to the user so that the user can manually adjust its members; after the new contact group is created, the device sends the images in the share candidate set to it.
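The exact-match check against existing contact groups can be sketched directly; the group names are illustrative assumptions.

```python
# Sketch of the group-sharing check: find an existing contact group
# whose member set exactly matches the contacts in the share candidate
# set; if none matches, the device would create a new group instead.

def find_matching_group(groups, share_contacts):
    """groups: {group_name: set of members}; returns a group name or None."""
    for name, members in groups.items():
        if members == set(share_contacts):
            return name
    return None
```

A `None` result corresponds to the branch where a new group is created and offered to the user for manual adjustment.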
Figures 26A-26C are a second schematic diagram of sharing images from the thumbnail interface according to an embodiment of the present invention.
As shown in Figure 26A, the device determines image 1 and image 2 to be one candidate share set and detects that image 1 contains contact 1 and contact 2, and image 2 contains contact 1 and contact 3. As shown in Figure 26B, when the user taps "share with contact group", the device detects a contact group that contains exactly contact 1, contact 2, and contact 3. As shown in Figure 26C, the device sends image 1 and image 2 to that contact group.
Step 4: Updating the sharing state of the share candidate set
After the images in a share candidate set have been shared, the device indicates the set's sharing state to the user in thumbnail mode in some manner, e.g. by icons that inform the user which individual contacts or contact groups the share candidate set has been shared with, how many times it has been shared, and so on.
Through the present embodiment, the efficiency of image sharing is improved.
Embodiment 13: Quick sharing in chat mode
Step 1: The device generates share candidate sets
As in Embodiment 12, the device determines share candidate sets by analyzing information such as image content, shooting time, and geographical position; the present Embodiment 13 does not repeat this.
Step 2: The device prompts the user to share in chat mode
When the device detects that it is in chat mode, it extracts the contacts the user is communicating with and compares them against each share candidate set. If the contacts contained in a share candidate set match the contacts the user is communicating with, and that candidate set has not yet been shared, the device prompts the user to share it in some manner.
Figure 27 is a first schematic diagram of sharing from the chat interface according to an embodiment of the present invention.
As shown in Figure 27, when the device detects that the user is chatting with a contact group consisting of contact 1, contact 2, and contact 3, and finds an existing share candidate set containing contact 1, contact 2, and contact 3, the device pops up a prompt box displaying the thumbnails of the images in the candidate set; when it detects that the user taps the confirm-share button, the images in the share candidate set are sent to the current group chat.
When the device detects that it is in chat mode, it may also automatically analyze the user's input and judge, by natural language processing, whether the user intends to share an image. If so, it analyzes what the user wants to share and pops up a prompt box displaying the user-interested regions whose label category matches that content; they may be arranged by time, by how often the user browses them, and so on. When the device detects that the user selects one or more of these images and taps send, it sends the images containing the user-interested regions, or the cropped-out user-interested regions themselves, to the group.
Figure 28 is a second schematic diagram of sharing from the chat interface according to an embodiment of the present invention. As shown in Figure 28, the user types "do you like this car"; the device detects the input and judges by analysis that the user intends to share the label category "car". The device pops up a prompt box displaying the user-interested regions labeled car; when it detects that the user taps one of the images, the cropped-out user-interested region is sent to the group.
Through the present embodiment, the efficiency of image sharing is improved.
Embodiment 14: An image condensation method based on user-interested regions
Step 1: The device aggregates and separates the user-interested regions within a time period
The device determines a time period and performs aggregation and separation on the user-interested regions within it.
The period may be preset, e.g. every 24 hours is one period, and the images shot within each 24 hours form one aggregation-separation candidate set.
The period may also be determined from changes in geographical position: the device detects that it is at a first geographical position at a first moment, at a second geographical position at a second moment, and at a third geographical position at a third moment, where the first and third positions may be the same position; the device then sets the period from the second moment to the third moment. For example, if the device detects that it was in Beijing on the 1st of a month, in Nanjing on the 2nd, and back in Beijing on the 3rd, it sets the period from the 2nd to the 3rd, and the images whose shooting time falls between the 2nd and the 3rd form one aggregation-separation period. To judge whether its geographical position has changed, the device may measure the distance between positions: once the device has moved more than a certain distance, its position is judged to have changed; this distance may be preset, e.g. 20 kilometers.
The device performs aggregation and separation on the user-interested regions by analyzing the content of the images within the period: the device detects the category label of each user-interested region in the images, aggregates the user-interested regions containing the same category label, and separates those containing different category labels, e.g. aggregating food, contact 1, and contact 2 separately.
The device may also aggregate and separate user-interested regions by the contacts appearing in the images: the device detects the identity of each user-interested region whose category label is person, aggregates the regions of the same contact, and separates those of different contacts.
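The aggregation-separation step is essentially grouping by a key. The sketch below uses either a category label or a contact identity as the key; the region identifiers are illustrative.

```python
# Sketch of aggregation-separation: user-interested regions carrying
# the same key (category label, or contact identity for person regions)
# are aggregated together, and different keys are kept apart.

def aggregate_regions(regions):
    """regions: list of (key, region_id) -> {key: list of region_ids}."""
    groups = {}
    for key, region in regions:
        groups.setdefault(key, []).append(region)
    return groups
```

Each resulting group (food, contact 1, contact 2, ...) then feeds the selection and stitching in Step 2.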
Step 2: The device produces a condensed set
Mode (1): condensation from images to text.
The device selects user-interested regions from each aggregated set. The selection condition may be preset, e.g. the most recently shot or the earliest shot; the regions may also be ranked by image quality and the user-interested region with the best quality chosen. The chosen user-interested regions are stitched together; during stitching, the stitching template is adjusted automatically to the shape and proportions of the user-interested regions, and each tile can link back to its original image in the album. Finally, a brief caption for the collage is generated from the content of the user-interested regions.
Figure 29 is a schematic diagram of the image-to-text condensation method according to an embodiment of the present invention.
As shown in Figure 29, the device first selects the images within the time period and aggregates and separates their user-interested regions, producing a landscape set, a contact 1 set, a contact 2 set, a food set, and a flower set. It then chooses four images to stitch together, displaying the subject of each user-interested region in the collage, and finally generates a passage of text from the content of the user-interested regions. When the device detects that the user taps the stitched image, it can link back to the original image containing that user-interested region.
Mode (2): condensation from text to images.
The user inputs a passage of text; the device detects it and extracts keywords, whose types include time, geographical position, object names, contact identities, etc. The device locates images in the album by the extracted time and geographical position and, by the object names, contact identities, and so on, selects the user-interested regions matching the keywords. The user-interested regions matching the keywords, or the images containing them, are then inserted between the words of the user's input.
Figure 30 is according to embodiment of the present invention, by the image method for concentrating schematic diagram of word to image.
As shown in Figure 30, the equipment extracts the keywords "today", "I", "girlfriend", "landscape", "Nanjing", "lotus flower", and "food" from the text input by the user, determines images according to these keywords, chooses the user-interested regions containing the keyword content, crops those user-interested regions out of the images, and inserts them into the text input by the user.
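The matching step of mode (2) can be sketched as follows. The keyword list stands in for the extraction step (time, place, object names, contact identities), which is assumed to have been done elsewhere, and the album structure is an assumption for illustration.

```python
def select_regions(text, album, keywords):
    """Choose user-interested regions whose class labels match keywords
    found in the user's text. `album` maps image names to lists of region
    labels; each match would be cropped from its image and inserted into
    the text next to the matching keyword.
    """
    hits = [kw for kw in keywords if kw in text]
    selected = []
    for name, labels in album.items():
        for label in labels:
            if label in hits:
                selected.append((label, name))
    return sorted(selected)

album = {
    "a.jpg": ["lotus flower", "landscape"],
    "b.jpg": ["food"],
    "c.jpg": ["pet"],
}
text = "Today my girlfriend and I saw a lotus flower in Nanjing and ate great food"
print(select_regions(text, album, ["lotus flower", "food", "pet"]))
# → [('food', 'b.jpg'), ('lotus flower', 'a.jpg')]
```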
Embodiment 15: an image conversion method based on image content
Figure 31 is a schematic diagram of image conversion based on image content according to an embodiment of the present invention.
Step 1: The equipment detects and clusters document images
The equipment detects the images carrying a document label on the user's equipment. It judges whether the images carrying document labels derive from the same document by using the appearance style and content of the document; for example, document images containing the same PPT template derive from the same document. The text in the images is also analyzed with natural language processing to judge whether each image derives from the same document.
The trigger condition for this step may be automatic. For example, the equipment monitors the image files in the album in the background in real time and triggers the step when it detects a change in the number of image files, such as an increase. For another example, in an instant messaging application, the equipment automatically detects whether an image received by the user is a document image and, if so, triggers the step, detecting and clustering document images within the dialogue; the equipment may detect and cluster document images in the interaction with a single contact, or in the interaction of a group.
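The clustering of step 1 can be sketched by grouping images on an appearance-style signature. Here each image carries a hypothetical `template` string (standing in for the shared-PPT-template comparison); a real system would compute this signature from pixels and also compare the recognized text, as the embodiment describes.

```python
from collections import defaultdict

def cluster_document_images(doc_images):
    """Group document images judged to come from the same document.

    `doc_images` is a list of (image name, template signature) pairs;
    images sharing a signature are placed in the same cluster.
    """
    clusters = defaultdict(list)
    for name, template in doc_images:
        clusters[template].append(name)
    return dict(clusters)

received = [
    ("p1.jpg", "ppt-blue"),
    ("p2.jpg", "ppt-blue"),
    ("scan1.jpg", "a4-scan"),
]
print(cluster_document_images(received))
# → {'ppt-blue': ['p1.jpg', 'p2.jpg'], 'a4-scan': ['scan1.jpg']}
```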
Alternatively, the trigger condition for this step may be a manual user action. For example, a button for merging document images is arranged in the album menu, and the step is triggered when the equipment detects that the user clicks it. For another example, in an instant messaging application, the step is triggered when the equipment detects that the user long-presses a received image and selects the convert-to-document option.
Step 2: The equipment prompts the user to convert the images into a document
In thumbnail mode, the equipment displays the images deriving from the same document in a distinctive way, for example inside rectangle frames of the same color, and shows a button on them. When the equipment detects that the button is clicked, it enters the image-to-document conversion mode.
In an instant messaging application, when the equipment detects that the images received by the user include document images, it prompts the user in some way, for example with a special color or a pop-up bubble indicating that the images can be converted into a document, while showing a button. When the equipment detects that the user clicks the button, it enters the image-to-document conversion mode.
Step 3: The equipment generates the document according to user feedback
In the image-to-document conversion mode, the user can manually add or delete images, and the equipment adds or deletes the images to be converted according to the user's operations. When the equipment detects that the user clicks the "convert" button, it performs text detection and optical character recognition on the images, converts the text contained in the images into editable text, and saves it as a document for subsequent use by the user.
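The assembly part of step 3 can be sketched independently of any particular OCR engine. Here `ocr` is an injected text-recognition callable, a placeholder for the text detection and optical character recognition the equipment performs; the stub dictionary below exists only to make the example self-contained.

```python
def images_to_document(images, ocr):
    """Assemble one document from a cluster of document images.

    `ocr` maps an image reference to its recognized text. Images with no
    recognizable text contribute nothing; pages are joined in the order
    the images were added by the user.
    """
    pages = []
    for image in images:
        text = ocr(image).strip()
        if text:
            pages.append(text)
    return "\n\n".join(pages)

# A stub OCR for illustration; a real engine would read pixel data.
fake_ocr = {"slide1.jpg": "Agenda", "slide2.jpg": "Results", "blank.jpg": ""}
doc = images_to_document(["slide1.jpg", "slide2.jpg", "blank.jpg"], fake_ocr.get)
print(doc)  # doc == "Agenda\n\nResults"
```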
Embodiment 16: intelligent deletion recommendation based on image content
Step 1: Judging image similarity from the user-interested regions in images
For images containing user-interested regions, each user-interested region is cropped out, and the user-interested regions from different images are compared to judge whether the content contained in the images is similar.
For example, if image 1 contains contact 1, contact 2, and contact 3; image 2 contains contact 1, contact 2, and contact 3; and image 3 contains contact 1, contact 2, and contact 4, then image 1 and image 2 have higher similarity. For another example, if image 4 has a user-interested region containing a red flower, image 5 has a user-interested region containing a red flower, and image 6 has a user-interested region containing a yellow flower, then image 4 and image 5 have higher similarity.
In this step, the similarity of two images is directly proportional to the similarity of their user-interested regions; the positions of the user-interested regions are irrelevant to the similarity.
Step 2: Judging whether an image has semantic information from the user-interested regions it contains
The equipment extracts the region fields of the user-interested regions contained in the image. If the image contains user-interested regions with class labels, the image contains semantic information; for example, the image includes a person, a car, or a pet. If the image contains only user-interested regions without class labels, the image contains weak semantic information, such as the borders of geometric figures. If the image contains no user-interested region at all, the image has no semantic information; examples are a solid-color image or a severely under-exposed image.
Step 3: Judging the aesthetic quality of an image from the position relationships of its user-interested regions
The equipment extracts the class and position coordinates of each user-interested region from the region list of the image, and judges the aesthetic quality of the image from them. One judgment rule is the golden-ratio rule: for example, if every user-interested region contained in an image sits at a golden-section point, the aesthetic quality of the image is high. For another example, if a user-interested region containing a tree is located directly above a user-interested region containing a person, the aesthetic quality of the image is low.
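One possible reading of the golden-ratio rule in this step is sketched below: regions are scored by how close their centers sit to the frame's golden-section lines. The tolerance and the exact scoring are illustrative assumptions, not the patent's formula.

```python
def aesthetic_score(regions, width, height, tol=0.05):
    """Fraction of user-interested regions whose center lies near a
    golden-section line of the frame. `regions` holds (label, cx, cy)
    centers in pixels; `tol` is the allowed deviation as a fraction of
    the frame size.
    """
    phi = 0.618
    # The four golden-section lines of the frame.
    xs = (width * (1 - phi), width * phi)
    ys = (height * (1 - phi), height * phi)
    hits = 0
    for _, cx, cy in regions:
        near_x = any(abs(cx - x) <= tol * width for x in xs)
        near_y = any(abs(cy - y) <= tol * height for y in ys)
        if near_x or near_y:
            hits += 1
    return hits / len(regions) if regions else 0.0

# One region on a golden-section line, one dead center.
regions = [("person", 618, 500), ("tree", 500, 500)]
print(aesthetic_score(regions, 1000, 1000))  # → 0.5
```

A fuller implementation would also penalize bad relative placements, such as the tree-above-person example in the text.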
It should be noted that the execution order of step 1, step 2, and step 3 can be exchanged, and two or three of them can also be performed simultaneously; the present embodiment places no limitation on this.
Step 4: The equipment recommends deletion
The equipment aggregates images with high similarity and recommends them for deletion; the equipment recommends for deletion images that contain no class label or only weak semantic information; and the equipment recommends for deletion images with low aesthetic quality. When recommending a group of highly similar images for deletion, one image is taken as a reference and the differences between it and each other image are displayed, making it convenient for the user to choose the image to keep.
Figure 32 is a schematic diagram of intelligent deletion based on image content according to an embodiment of the present invention. As shown in Figure 32, color blocks can be used to highlight the distinguishing points in each image.
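The distinguishing points that Figure 32 highlights can be sketched as the regions present in exactly one of two similar images. Treating region identity as just the class label is an assumption for the sketch; a real system would compare region content.

```python
def distinguishing_points(reference, other):
    """Region labels present in exactly one of two similar images;
    these are the points a UI could highlight with color blocks.
    """
    ref, oth = set(reference), set(other)
    return sorted(ref ^ oth)  # symmetric difference

print(distinguishing_points(
    ["contact 1", "contact 2", "contact 3"],
    ["contact 1", "contact 2", "contact 4"],
))  # → ['contact 3', 'contact 4']
```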
Step 5: The equipment checks the user's operation and deletes images
The user selects the images to keep among the images recommended for deletion and then clicks the delete button for confirmation. After detecting the user's operation, the equipment keeps the images the user selected and deletes the others. Alternatively, the user selects the images to delete among the images recommended for deletion and clicks the delete button for confirmation; after detecting the user's operation, the equipment deletes the images the user selected and keeps the others.
Through the present embodiment, unwanted images can be deleted quickly.
Based on the above detailed analysis, an embodiment of the present invention further proposes an image management apparatus.
Figure 33 is a structure diagram of the image management apparatus according to an embodiment of the present invention.
As shown in Figure 33, the apparatus 260 includes:
an operation detection module 261, configured to detect an operation of a user directed at an image;
a management module 262, configured to manage the image based on the operation and the user-interested regions in the image.
In summary, embodiments of the present invention mainly include: (1) methods for producing user-interested regions in an image; (2) concrete applications of image management based on user-interested regions, such as image browsing and retrieval and quick sharing.
Specifically, an embodiment of the present invention can establish a region list for each image, containing the browsing frequency of each image, the object class of each region, the degree of attention each region receives, and so on. While browsing, the user can choose multiple user-interested regions in an image and apply a variety of operations to each of them, such as click, double-click, and swipe; different operations produce different retrieval results that are supplied to the user as candidates, and the candidate images are ordered according to the user's degree of preference. In addition, the user can choose multiple user-interested regions across multiple images in the album and search with them, or choose user-interested regions in the image captured by the camera in real time and search with them, thereby achieving quick browsing. Furthermore, a personalized tree structure can be established according to the distribution of images in the user's album, making the user's images better organized and convenient to browse quickly.
In terms of image transmission and sharing, an embodiment of the present invention compresses an image partially: a low compression ratio is applied to the user-interested regions, ensuring their definition, while a high compression ratio is applied outside the user-interested regions, saving power consumption and bandwidth resources during transmission. In addition, associations between images are established by analyzing image content, making quick sharing convenient for the user; for example, in an instant messaging application, the user's input is analyzed automatically, and relevant regions are cropped from images and supplied to the user for selection and sharing.
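The partial-compression idea above is driven by transmission parameters such as the number of images to transmit (claims 14 and 15 detail the conditions). The threshold logic can be sketched as follows; the mode names and threshold values are illustrative assumptions, not values fixed by the embodiments.

```python
def pick_compression_mode(n_images, t1, t2):
    """Choose a compression mode from the number of images to transmit,
    mirroring the threshold scheme described for image transmission.
    """
    if n_images < t1:
        return "no-compression"   # few images: send originals
    if n_images < t2:
        return "roi-preserving"   # compress only outside user-interested regions
    return "uniform"              # many images: same ratio everywhere

for n in (2, 10, 100):
    print(n, pick_compression_mode(n, t1=5, t2=50))
# prints: 2 no-compression / 10 roi-preserving / 100 uniform
```

Network quality and network type would add further branches (multiple compression on poor networks, differential ratios on mid-quality networks, no compression on free networks) in the same style.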
An embodiment of the present invention also implements an image summarization method with two modes: from images to text and from text to images.
An embodiment of the present invention also implements the function of converting document images of the same source in an album into a single document.
An embodiment of the present invention also implements intelligent deletion recommendation, recommending to the user for deletion images that are visually similar, similar in content, low in quality, or contain no meaningful object.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the invention. Any modification, equivalent substitution, improvement, and the like made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.
Claims (26)
1. An image management method, characterized by comprising:
detecting an operation of a user directed at an image;
managing the image based on the operation and a user-interested region in the image.
2. The method according to claim 1, characterized in that the operation comprises a selection operation on at least two user-interested regions, wherein the at least two user-interested regions belong to the same image or to different images;
and that managing the image comprises:
providing corresponding images and/or video frames based on the selection operation on the at least two user-interested regions.
3. The method according to claim 1, characterized in that the operation comprises a selection operation on a user-interested region and/or a retrieval content input operation, wherein the retrieval content input operation comprises a text input operation and/or a voice input operation;
and that managing the image comprises:
providing corresponding images and/or video frames based on the selection operation and/or the retrieval content input operation.
4. The method according to claim 2 or 3, characterized in that providing corresponding images and/or video frames based on the selection operation and/or the retrieval content input operation comprises at least one of the following:
when the selection operation is a first-class selection operation, the provided images and/or video frames contain user-interested regions corresponding to all of the user-interested regions targeted by the first-class selection operation;
when the selection operation is a second-class selection operation, the provided images and/or video frames contain user-interested regions corresponding to at least one of the user-interested regions targeted by the second-class selection operation;
when the selection operation is a third-class selection operation, the provided images and/or video frames do not contain user-interested regions corresponding to the user-interested regions targeted by the third-class selection operation;
when the retrieval content input operation is a first-class retrieval content input operation, the provided images and/or video frames contain user-interested regions corresponding to all of the user-interested regions targeted by the first-class retrieval content input operation;
when the retrieval content input operation is a second-class retrieval content input operation, the provided images and/or video frames contain user-interested regions corresponding to at least one of the user-interested regions targeted by the second-class retrieval content input operation;
when the retrieval content input operation is a third-class retrieval content input operation, the provided images and/or video frames do not contain user-interested regions corresponding to the user-interested regions targeted by the third-class retrieval content input operation.
5. The method according to claim 2 or 3, characterized in that, after the corresponding images and/or video frames are provided, the method further comprises:
determining priorities of the corresponding images and/or video frames;
determining a display order according to the priorities of the corresponding images and/or video frames;
displaying the corresponding images and/or video frames according to the display order.
6. The method according to claim 5, characterized in that determining the priorities of the corresponding images and/or video frames comprises at least one of the following:
determining the priorities of the corresponding images and/or video frames based on one item of related data counted at the full-image level;
determining the priorities of the corresponding images and/or video frames based on at least two items of related data counted at the full-image level;
determining the priorities of the corresponding images and/or video frames based on one item of related data counted at the object level;
determining the priorities of the corresponding images and/or video frames based on at least two items of related data counted at the object level;
determining the priorities of the corresponding images and/or video frames based on semantic combinations of objects;
determining the priorities of the corresponding images and/or video frames based on relative positions of objects.
7. The method according to claim 2 or 3, characterized in that the selection operation on a user-interested region is detected in at least one of the following modes:
a camera preview mode;
a picture browsing mode;
a thumbnail browsing mode.
8. The method according to claim 1, characterized in that managing the image comprises:
determining an object to share with, and sharing the image with that object; and/or
determining an image to be shared based on a chatting object or the chat content with the chatting object, and sharing the image to be shared with the chatting object.
9. The method according to claim 1, characterized in that managing the image comprises at least one of the following:
determining a contact group to share with based on the user-interested regions of the image, and, based on the user's group-sharing operation on the image, sharing the image with the contact group in group mode;
determining contacts to share with based on the user-interested regions of the image, and, based on the user's individual-sharing operation on the image, sending the image to each contact respectively, wherein the image shared with each contact contains the user-interested region corresponding to that contact;
when a chat sentence between the user and a chatting object corresponds to a user-interested region of the image, recommending the image to the user as a sharing candidate;
when the chatting object corresponds to a user-interested region of the image, recommending the image to the user as a sharing candidate.
10. The method according to claim 8 or 9, characterized in that the method further comprises:
after the image is shared, labeling the shared image according to the contact it was shared with.
11. The method according to claim 1, characterized in that managing the image comprises at least one of the following:
when the display screen is smaller than a predetermined size, displaying class images or class words of user-interested regions, and, based on the user's switching operation, switching to display other class images or class words of user-interested regions;
when the display screen is smaller than a predetermined size and a class of user-interested regions is chosen based on the user's selection operation, displaying an image of that class, and, based on the user's switching operation, switching to display other images of that class;
when the display screen is smaller than a predetermined size, displaying the image based on the number of its user-interested regions.
12. The method according to claim 11, characterized in that, when the display screen is smaller than a predetermined size, displaying the image based on the number of its user-interested regions comprises:
when the image contains no user-interested region, displaying the image in thumbnail mode or shrinking the image to a size adapted to the display screen for display;
when the image contains one user-interested region, displaying that user-interested region;
when the image contains multiple user-interested regions, displaying each user-interested region of the image in turn, or displaying the first user-interested region of the image and, based on the user's switching operation, switching to display the user-interested regions of the image other than the first user-interested region.
13. The method according to claim 1, characterized in that, when performing inter-equipment image transmission, managing the image comprises:
compressing the image based on image transmission parameters and the user-interested regions of the image, and transmitting the compressed image; and/or
receiving an image sent by a server, a base station, or a user equipment, the image having been compressed based on image transmission parameters and user-interested regions.
14. The method according to claim 13, characterized in that compressing the image comprises at least one of the following:
when the image transmission parameters satisfy a user-interested-region non-compression condition, compressing the image region outside the user-interested regions of the image to be transmitted while leaving the user-interested regions of the image to be transmitted uncompressed;
when the image transmission parameters satisfy a differential compression condition, compressing the image region outside the user-interested regions of the image to be transmitted with a first compression ratio and compressing the user-interested regions of the image to be transmitted with a second compression ratio, wherein the second compression ratio is lower than the first compression ratio;
when the image transmission parameters satisfy a non-differential compression condition, compressing the image region outside the user-interested regions of the image to be transmitted and the user-interested regions of the image to be transmitted with the same compression ratio;
when the image transmission parameters satisfy a non-compression condition, performing no compression on the image to be transmitted;
when the image transmission parameters satisfy a multiple-compression condition, performing compression processing and one or more rounds of transmission processing on the image to be transmitted.
15. The method according to claim 14, characterized in that the image transmission parameters include at least one of the number of images to be transmitted, the transmission network type, and the transmission network quality;
and that the method comprises at least one of the following:
when the number of images to be transmitted is smaller than a first predetermined threshold, judging that the image transmission parameters satisfy the non-compression condition;
when the number of images to be transmitted is greater than or equal to the first threshold and smaller than a second predetermined threshold, judging that the image transmission parameters satisfy the user-interested-region non-compression condition, wherein the second threshold is greater than the first threshold;
when the number of images to be transmitted is greater than or equal to the second threshold, judging that the image transmission parameters satisfy the non-differential compression condition;
when the assessed value of the transmission network quality is lower than a third predetermined threshold, judging that the image transmission parameters satisfy the multiple-compression condition;
when the assessed value of the transmission network quality is greater than or equal to the third threshold and lower than a fourth predetermined threshold, judging that the image transmission parameters satisfy the differential compression condition, wherein the fourth threshold is greater than the third threshold;
when the transmission network type is a free network, judging that the image transmission parameters satisfy the non-compression condition.
16. The method according to claim 1, characterized in that managing the image comprises:
selecting images based on user-interested regions;
generating a collage based on the selected images, wherein the user-interested region of each selected image is shown in the collage.
17. The method according to claim 16, characterized in that the method further comprises:
detecting the user's selection operation on a user-interested region in the collage;
displaying the selected image containing the chosen user-interested region.
18. The method according to claim 1, characterized in that managing the image comprises:
detecting text input by the user;
retrieving images containing user-interested regions associated with the text;
inserting the retrieved images of the user-interested regions into the text input by the user.
19. The method according to claim 1, characterized in that the method further comprises:
when multiple images are judged to come from the same document, aggregating the multiple images into a document automatically, or aggregating the multiple images into a document based on the user's trigger operation.
20. The method according to claim 1, characterized in that managing the image comprises at least one of the following:
automatically deleting images or recommending images for deletion based on the comparison result of the classes of the user-interested regions in different images;
determining the degree of semantic information contained in each image based on its user-interested regions, and automatically deleting images or recommending images for deletion based on the comparison result of the degrees of semantic information of different images;
scoring each image based on the relative positions between its user-interested regions, and automatically deleting images or recommending images for deletion based on the scoring result;
scoring each image based on the absolute position of at least one of its user-interested regions, and automatically deleting images or recommending images for deletion based on the scoring result.
21. The method according to claim 1, characterized in that managing the image comprises at least one of the following:
determining a personalized type of the image or of a user-interested region;
adjusting a preset classification model so that the classification model can classify according to the personalized type;
performing personalized classification on the image or user-interested region by using the adjusted classification model.
22. The method according to claim 21, characterized in that adjusting the preset classification model comprises:
when the preset classes of the classification model at the equipment local side include the personalized type, reconfiguring the preset classes in the classification model at the equipment local side;
when the preset classes of the classification model at the equipment local side do not include the personalized type, adding the personalized type to the classification model at the equipment local side;
when the preset classes of the classification model in the cloud include the personalized type, reconfiguring the preset classes in the classification model in the cloud;
when the preset classes of the classification model in the cloud do not include the personalized type, adding the personalized type to the classification model in the cloud.
23. The method according to claim 21, characterized in that, after performing personalized classification on the image or user-interested region, the method further comprises at least one of the following:
the equipment local side receives classification error feedback information provided by the user and uses the classification error feedback information to train the adjusted classification model at the equipment local side;
the cloud receives classification error feedback information provided by the user and uses the classification error feedback information to train the adjusted classification model in the cloud;
when the personalized classification result of the cloud is inconsistent with the personalized classification result of the equipment local side, updating the personalized classification result of the equipment local side with the personalized classification result of the cloud, and sending classification error feedback information to the cloud.
24. The method according to any one of claims 1-23, characterized in that the user-interested region includes at least one of the following:
an image region corresponding to a manual focus position;
an image region corresponding to an auto-focus position;
an object region;
a viewpoint heat-map region;
a saliency heat-map region.
25. The method according to any one of claims 1-24, characterized in that the method further comprises:
generating a class label based on an object region detection result; and/or
inputting the user-interested region into an object classifier and generating a class label based on the output result of the object classifier.
26. An image management apparatus, characterized by comprising:
an operation detection module, configured to detect an operation of a user directed at an image;
a management module, configured to manage the image based on the operation and a user-interested region of the image.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170148051A KR20180055707A (en) | 2016-03-29 | 2017-11-08 | Image management method and apparatus thereof |
EP17871827.6A EP3491504A4 (en) | 2016-11-16 | 2017-11-16 | Image management method and apparatus thereof |
US15/814,972 US20180137119A1 (en) | 2016-11-16 | 2017-11-16 | Image management method and apparatus thereof |
PCT/KR2017/013047 WO2018093182A1 (en) | 2016-11-16 | 2017-11-16 | Image management method and apparatus thereof |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2016101867662 | 2016-03-29 | ||
CN201610186766 | 2016-03-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107239203A true CN107239203A (en) | 2017-10-10 |
Family
ID=59983716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611007300.8A Pending CN107239203A (en) | 2016-03-29 | 2016-11-16 | A kind of image management method and device |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20180055707A (en) |
CN (1) | CN107239203A (en) |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10769543B2 (en) | 2018-08-01 | 2020-09-08 | Forcepoint Llc | Double-layered image classification endpoint solution |
KR102661185B1 (en) | 2018-10-18 | 2024-04-29 | Samsung Electronics Co., Ltd. | Electronic device and method for obtaining images |
KR102166841B1 (en) * | 2018-11-26 | 2020-10-16 | Korea Electronics Technology Institute | System and method for improving image quality |
KR20200085611A (en) * | 2019-01-07 | 2020-07-15 | Samsung Electronics Co., Ltd. | Electronic apparatus and controlling method thereof |
US11695726B2 (en) | 2019-01-24 | 2023-07-04 | Huawei Technologies Co., Ltd. | Image sharing method and mobile device |
CN110968786B (en) * | 2019-11-29 | 2023-10-17 | Baidu Online Network Technology (Beijing) Co., Ltd. | Visual information recommendation method, device, equipment and storage medium |
KR20220149803A (en) * | 2021-04-23 | 2022-11-08 | Samsung Electronics Co., Ltd. | Electronic device and method for sharing information |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100228751A1 (en) * | 2009-03-09 | 2010-09-09 | Electronics And Telecommunications Research Institute | Method and system for retrieving UCC image based on region of interest |
KR20110028811A (en) * | 2009-09-14 | 2011-03-22 | LG Electronics Inc. | Mobile terminal and information providing method thereof |
CN102576372A (en) * | 2009-11-02 | 2012-07-11 | Microsoft Corp. | Content-based image search |
CN102687140A (en) * | 2009-12-30 | 2012-09-19 | Nokia Corp. | Methods and apparatuses for facilitating content-based image retrieval |
CN103562911A (en) * | 2011-05-17 | 2014-02-05 | Microsoft Corp. | Gesture-based visual search |
CN103927767A (en) * | 2014-04-18 | 2014-07-16 | Beijing Zhigu Ruituo Tech Co., Ltd. | Image processing method and device |
US20140211065A1 (en) * | 2013-01-30 | 2014-07-31 | Samsung Electronics Co., Ltd. | Method and system for creating a context based camera collage |
US20150005630A1 (en) * | 2013-07-01 | 2015-01-01 | Samsung Electronics Co., Ltd. | Method of sharing information in ultrasound imaging |
- 2016-11-16 CN CN201611007300.8A patent/CN107239203A/en active Pending
- 2017-11-08 KR KR1020170148051A patent/KR20180055707A/en not_active Application Discontinuation
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107908337A (en) * | 2017-12-14 | 2018-04-13 | Guangzhou Samsung Telecommunications Technology Research Co., Ltd. | Method and apparatus for sharing picture material |
CN110020086B (en) * | 2017-12-22 | 2021-10-26 | China Mobile Group Zhejiang Co., Ltd. | User portrait query method and device |
CN110020086A (en) * | 2017-12-22 | 2019-07-16 | China Mobile Group Zhejiang Co., Ltd. | User portrait query method and device |
CN109189880A (en) * | 2017-12-26 | 2019-01-11 | 爱品克科技(武汉)股份有限公司 | User interest classification method based on short text |
CN109963071B (en) * | 2017-12-26 | 2021-07-27 | Ubtech Robotics Corp | Method, system and terminal device for automatically editing images |
CN109963071A (en) * | 2017-12-26 | 2019-07-02 | Ubtech Robotics Corp | Method, system and terminal device for automatically editing images |
CN108182404A (en) * | 2017-12-28 | 2018-06-19 | Shanghai Transsion Information Technology Co., Ltd. | Picture sharing method and picture sharing system based on an intelligent terminal |
CN108230283A (en) * | 2018-01-19 | 2018-06-29 | Vivo Mobile Communication Co., Ltd. | Sticker material recommendation method and electronic device |
CN110209916A (en) * | 2018-02-05 | 2019-09-06 | AutoNavi Software Co., Ltd. | Point-of-interest image recommendation method and device |
CN110209916B (en) * | 2018-02-05 | 2021-08-20 | Alibaba (China) Co., Ltd. | Method and device for recommending point-of-interest images |
CN108494947B (en) * | 2018-02-09 | 2021-01-26 | Vivo Mobile Communication Co., Ltd. | Image sharing method and mobile terminal |
CN108494947A (en) * | 2018-02-09 | 2018-09-04 | Vivo Mobile Communication Co., Ltd. | Image sharing method and mobile terminal |
CN108650524A (en) * | 2018-05-23 | 2018-10-12 | Tencent Technology (Shenzhen) Co., Ltd. | Video cover generation method, device, computer equipment and storage medium |
CN108650524B (en) * | 2018-05-23 | 2022-08-16 | Tencent Technology (Shenzhen) Co., Ltd. | Video cover generation method and device, computer equipment and storage medium |
CN108805867A (en) * | 2018-05-25 | 2018-11-13 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and apparatus for outputting tobacco leaf grade information |
CN112204545A (en) * | 2018-06-01 | 2021-01-08 | Fujifilm Corp. | Image processing device, image processing method, image processing program, and recording medium storing the program |
WO2020001648A1 (en) * | 2018-06-29 | 2020-01-02 | Huawei Technologies Co., Ltd. | Image processing method and apparatus and terminal device |
CN110727808A (en) * | 2018-06-29 | 2020-01-24 | Huawei Technologies Co., Ltd. | Image processing method and device and terminal equipment |
CN112740651A (en) * | 2018-09-26 | 2021-04-30 | Qualcomm Inc. | Zoomed-in region of interest |
WO2020063042A1 (en) * | 2018-09-26 | 2020-04-02 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Picture classification method and apparatus, and computer-readable storage medium and electronic device |
CN109410163A (en) * | 2018-10-23 | 2019-03-01 | Beijing Megvii Technology Co., Ltd. | Method, device, terminal and computer storage medium for obtaining a recommended photographing position |
WO2020082724A1 (en) * | 2018-10-26 | 2020-04-30 | Huawei Technologies Co., Ltd. | Method and apparatus for object classification |
CN111104954A (en) * | 2018-10-26 | 2020-05-05 | Huawei Technologies Co., Ltd. | Object classification method and device |
CN111104954B (en) * | 2018-10-26 | 2023-11-14 | Huawei Cloud Computing Technologies Co., Ltd. | Object classification method and device |
CN109432779B (en) * | 2018-11-08 | 2022-05-17 | Beijing Megvii Technology Co., Ltd. | Difficulty adjusting method and device, electronic equipment and computer readable storage medium |
CN109432779A (en) * | 2018-11-08 | 2019-03-08 | Beijing Megvii Technology Co., Ltd. | Difficulty adjusting method and device, electronic equipment and computer readable storage medium |
CN110070107A (en) * | 2019-03-26 | 2019-07-30 | Huawei Technologies Co., Ltd. | Object recognition method and device |
CN109992568A (en) * | 2019-03-31 | 2019-07-09 | Lenovo (Beijing) Co., Ltd. | Information processing method and device |
CN110012341A (en) * | 2019-04-17 | 2019-07-12 | Beijing Thunisoft Information Technology Co., Ltd. | Video evidence display method, display device and electronic equipment |
CN110045892A (en) * | 2019-04-19 | 2019-07-23 | Vivo Mobile Communication Co., Ltd. | Display method and terminal device |
CN110045892B (en) * | 2019-04-19 | 2021-04-02 | Vivo Mobile Communication Co., Ltd. | Display method and terminal equipment |
US20220284696A1 (en) * | 2019-07-10 | 2022-09-08 | Toyota Motor Europe | System and method for training a model to perform semantic segmentation on low visibility images using high visibility images having a close camera view |
CN110633394B (en) * | 2019-08-28 | 2021-10-15 | Zhejiang University of Technology | Graph compression method based on feature enhancement |
CN110633394A (en) * | 2019-08-28 | 2019-12-31 | Zhejiang University of Technology | Graph compression method based on feature enhancement |
CN110516083A (en) * | 2019-08-30 | 2019-11-29 | BOE Technology Group Co., Ltd. | Photo album management method, storage medium and electronic equipment |
CN110913141A (en) * | 2019-11-29 | 2020-03-24 | Vivo Mobile Communication Co., Ltd. | Video display method, electronic device and medium |
CN111353064A (en) * | 2020-02-28 | 2020-06-30 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Expression package generation method, device, equipment and medium |
CN111353064B (en) * | 2020-02-28 | 2023-06-13 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Expression package generation method, device, equipment and medium |
CN113361511A (en) * | 2020-03-05 | 2021-09-07 | SF Technology Co., Ltd. | Method, device and equipment for establishing correction model and computer readable storage medium |
TWI767288B (en) * | 2020-04-24 | 2022-06-11 | Inventec Appliances Corp. | Group-sharing image-capturing method |
CN112560992B (en) * | 2020-12-25 | 2023-09-01 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method, device, electronic equipment and storage medium for optimizing picture classification model |
CN112560992A (en) * | 2020-12-25 | 2021-03-26 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and device for optimizing image classification model, electronic equipment and storage medium |
WO2022155818A1 (en) * | 2021-01-20 | 2022-07-28 | BOE Technology Group Co., Ltd. | Image encoding method and device, image decoding method and device, and codec |
WO2022228373A1 (en) * | 2021-04-28 | 2022-11-03 | Vivo Mobile Communication Co., Ltd. | Image management method and apparatus, electronic device, and readable storage medium |
CN113282780A (en) * | 2021-04-28 | 2021-08-20 | Vivo Mobile Communication Co., Ltd. | Picture management method and device, electronic equipment and readable storage medium |
CN116309494A (en) * | 2023-03-23 | 2023-06-23 | 宁波斯年智驾科技有限公司 | Method, device, equipment and medium for determining point-of-interest information in electronic map |
CN116309494B (en) * | 2023-03-23 | 2024-01-23 | 宁波斯年智驾科技有限公司 | Method, device, equipment and medium for determining point-of-interest information in electronic map |
Also Published As
Publication number | Publication date |
---|---|
KR20180055707A (en) | 2018-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107239203A (en) | Image management method and device | |
US20180137119A1 (en) | Image management method and apparatus thereof | |
Lokoč et al. | Interactive search or sequential browsing? A detailed analysis of the video browser showdown 2018 | |
EP2457183B1 (en) | System and method for tagging multiple digital images | |
CN104063683B (en) | Expression input method and device based on face identification | |
JP5830784B2 (en) | Interest graph collection system by relevance search with image recognition system | |
CN103069415B (en) | Computer-implemented method, computer program and computer system for image processing |
US20070288453A1 (en) | System and Method for Searching Multimedia using Exemplar Images | |
CN108230262A (en) | Image processing method, image processing apparatus and storage medium | |
CN110020185A (en) | Intelligent search method, terminal and server | |
CN107408212A (en) | System and method for identifying the unwanted photo being stored in equipment | |
CN107273106A (en) | Object information translation and derived information acquisition method and device |
US9137574B2 (en) | Method or system to predict media content preferences | |
JP2009251850A (en) | Commodity recommendation system using similar image search | |
CN110263746A (en) | Gesture-based visual search |
JP2011154687A (en) | Method and apparatus for navigating image data set, and program | |
CN110263180A (en) | Intent knowledge graph generation method, intent recognition method and device |
CN103186538A (en) | Image classification method, image classification device, image retrieval method and image retrieval device | |
CN113766296B (en) | Live broadcast picture display method and device | |
Ahanger et al. | Video query formulation | |
CN101334780A (en) | Person image search method, system and recording media for storing image metadata |
CN107861970A (en) | Commodity picture search method and device |
CN112613548B (en) | User customized target detection method, system and storage medium based on weak supervised learning | |
Berg et al. | It's all about the data | |
JP6787831B2 (en) | Target detection device, detection model generation device, program and method that can be learned by search results |
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2017-10-10