CN103353879B - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
CN103353879B
CN103353879B
Authority
CN
China
Prior art keywords
image
target image
user
information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310240246.1A
Other languages
Chinese (zh)
Other versions
CN103353879A (en)
Inventor
谢西庭
杜琳
于魁飞
潘磊
黄伟才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhigu Ruituo Technology Services Co Ltd
Original Assignee
Beijing Zhigu Ruituo Technology Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhigu Ruituo Technology Services Co Ltd filed Critical Beijing Zhigu Ruituo Technology Services Co Ltd
Priority to CN201310240246.1A priority Critical patent/CN103353879B/en
Publication of CN103353879A publication Critical patent/CN103353879A/en
Application granted granted Critical
Publication of CN103353879B publication Critical patent/CN103353879B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides an image processing method and device. The method comprises the following steps: receiving an acquisition instruction of a user for a target image; collecting metadata of the target image and/or personal information of the user; and judging whether a related image of the target image exists, and marking a specific area on the target image according to the judgment result. The method and the device can mark the specific area on the target image according to image characteristics and/or user characteristics, and can effectively improve the user experience while providing conditions for rapid presentation of the image.

Description

Image processing method and apparatus
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
When a user browses pictures on a terminal device, a certain waiting time may be required before a complete, clear picture is presented. In particular, for pictures from a network, if the network speed is slow relative to the picture size, the wait before the picture appears adversely affects the user experience. To improve the user experience, a transmit-while-display approach, i.e. presenting the picture from a part to the whole, is usually adopted to gradually present the entire picture:
displaying one part of the picture at a time in top-to-bottom order until the whole picture is displayed;
in a blur-to-clear manner, first transmitting a part of the picture data to the device, computing and displaying pixel values for the whole picture from that partial data, then continuing to transmit the remaining data and gradually replacing the computed pixel values with the actual pixel values until the whole picture is displayed.
Although both of these part-to-whole processing modes can improve the user experience to a certain extent, the improvement is limited for different pictures and different users within the waiting time before the complete, clear picture is presented. (A minimal sketch of both modes is given below.)
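For illustration only, here is a minimal Python sketch of the two "part to whole" display modes described above, using a NumPy array as a stand-in for picture data; the step size and downsampling factors are arbitrary assumptions, not values from the patent.

```python
import numpy as np

def display_top_to_bottom(image, rows_per_step):
    """Mode 1: yield successively larger top portions of the picture."""
    for row in range(rows_per_step, image.shape[0] + rows_per_step, rows_per_step):
        yield image[:row, :]

def display_blur_to_clear(image, factors=(8, 4, 2, 1)):
    """Mode 2: yield coarse-to-fine approximations of the picture.
    Each step keeps every k-th pixel and repeats it to fill the frame,
    imitating 'compute pixel values for the whole picture from partial data'."""
    for k in factors:
        coarse = image[::k, ::k]
        approx = np.repeat(np.repeat(coarse, k, axis=0), k, axis=1)
        yield approx[:image.shape[0], :image.shape[1]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    picture = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    for step, partial in enumerate(display_top_to_bottom(picture, rows_per_step=16), start=1):
        print(f"mode 1, step {step}: {partial.shape[0]} of 64 rows shown")
    for step, approx in enumerate(display_blur_to_clear(picture), start=1):
        print(f"mode 2, step {step}: approximation of shape {approx.shape}")
```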
Disclosure of Invention
The technical problem to be solved by the present invention is to provide an image processing method and apparatus that can effectively improve the user experience while providing conditions for rapid image presentation.
In order to solve the above technical problem, in a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
receiving an acquisition instruction of a user for a target image;
collecting metadata of the target image and/or personal information of the user;
and judging whether the related image of the target image exists or not, and marking a specific area on the target image according to a judgment result.
With reference to the first aspect, in a first possible implementation manner, whether a reference image is a related image of the target image is determined according to a correlation degree of metadata of the target image and the reference image.
With reference to the first aspect, in a second possible implementation, the method further includes the steps of:
and acquiring the social relationship information of the user according to the personal information of the user.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, whether a reference image is a related image of the target image is determined according to the social relationship information of the user.
With reference to the second possible implementation manner of the first aspect, in a fourth possible implementation manner, it is determined whether a reference image is a related image of the target image according to the social relationship information of the user and the relevance of the metadata of the target image and the reference image.
With reference to the first aspect, in a fifth possible implementation manner, in the step of determining whether there is a related image of a target image, and marking a specific area on the target image according to a determination result:
and setting and marking a specific area when the related image of the target image does not exist.
With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner, the specific region is a region of interest of the user on the target image.
With reference to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner, the specific area is set according to the following steps:
displaying the target image;
and acquiring a region of interest of the user on the target image, and setting the region of interest as a specific area.
With reference to the first aspect, in an eighth possible implementation manner, in the step of determining whether there is a related image of a target image, and marking a specific area on the target image according to a determination result:
when a related image of the target image exists, marking a region on the target image corresponding to a specific region on the related image.
With reference to the eighth possible implementation manner of the first aspect, in a ninth possible implementation manner, the method further includes the step of:
preferentially displaying a specific region of the target image.
With reference to the ninth possible implementation manner of the first aspect, in a tenth possible implementation manner, the method further includes the steps of:
and displaying the area of the target image except the specific area.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in an eleventh possible implementation, the personal information includes: the name, sex, age, occupation, nationality and/or biological characteristics of the user.
With reference to any one of the second to fourth possible implementation manners of the first aspect, in a twelfth possible implementation manner, the social relationship information includes: one or more of family information, friend information, colleague information, historical behavior association information, address book information, and social application information of the user.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the receiving module is used for receiving an acquisition instruction of a user for a target image;
the acquisition module is used for acquiring metadata of the target image and/or personal information of the user;
and the marking module is used for judging whether the related image of the target image exists or not and marking the specific area on the target image according to the judgment result.
With reference to the second aspect, in a first possible implementation manner, the marking module determines whether the reference image is a related image of the target image according to a correlation degree of metadata of the target image and the reference image.
With reference to the second aspect, in a second possible implementation manner, the acquisition module is further configured to acquire social relationship information of the user according to the personal information of the user.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner, the tagging module determines whether a reference image is a related image of the target image according to the social relationship information of the user.
With reference to the second possible implementation manner of the second aspect, in a fourth possible implementation manner, the tagging module determines whether the reference image is a related image of the target image according to the social relationship information of the user and the relevance of the metadata of the target image and the reference image.
With reference to the second aspect, in a fifth possible implementation manner, the marking module sets and marks a specific area when there is no related image of the target image.
With reference to the fifth possible implementation manner of the second aspect, in a sixth possible implementation manner, the specific region is a region of interest of the user on the target image.
With reference to the sixth possible implementation manner of the second aspect, in a seventh possible implementation manner, the apparatus further includes:
the display module is used for displaying the target image;
the acquisition module is further configured to acquire a region of interest of the user on the target image and set the region of interest as a specific area.
With reference to the second aspect, in an eighth possible implementation manner, the marking module marks a region on the target image corresponding to a specific region on the related image when the related image of the target image exists.
With reference to the eighth possible implementation manner of the second aspect, in a ninth possible implementation manner, the apparatus further includes:
and the display module is used for preferentially displaying the specific area of the target image.
With reference to the ninth possible implementation manner of the second aspect, in a tenth possible implementation manner, the display module is further configured to display the area of the target image other than the specific area.
With reference to the second aspect or any one of the above possible embodiments of the second aspect, in an eleventh possible embodiment, the personal information includes: the user's name, gender, age, occupation, nationality, biological characteristics, address book information, and/or social application information.
With reference to any one of the second to fourth possible implementation manners of the second aspect, in a twelfth possible implementation manner, the social relationship information includes: one or more of family information, friend information, colleague information, and historical behavior association information of the user.
The method and the device can mark the specific area on the target image according to the image characteristics and/or the user characteristics, and can effectively improve the user experience on the basis of providing conditions for the rapid presentation of the image.
Drawings
FIG. 1 is a flow chart of an image processing method of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a configuration of an image processing apparatus according to an embodiment of the present invention;
FIG. 3 is a flow chart of one example of an image processing method of an embodiment of the present invention;
FIG. 4 is a flow chart of another example of an image processing method of an embodiment of the present invention;
FIG. 5 is a schematic diagram of another configuration of the image processing apparatus according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention in further detail with reference to the drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Considering that different users focus on different parts of different images, the method and device provided by the embodiments of the present invention mark a specific area of an image based on image characteristics and/or user characteristics, so that the user experience is improved more effectively while conditions are provided for rapid presentation of the image.
As shown in fig. 1, the image processing method according to the embodiment of the present invention includes the steps of:
s101, receiving an acquisition instruction of a user for a target image.
Acquisition of the target image by the user generally refers to the user browsing and viewing the target image. The acquisition instruction is the operation by which the user indicates the intention to browse the target image, and includes instructions in input forms such as voice or touch, for example clicking a thumbnail of a picture or a file icon.
And S102, collecting metadata of the target image and/or personal information of a user.
S103, judging whether the related image of the target image exists according to the metadata of the target image and/or the personal information of the user, and marking a specific area on the target image according to the judgment result.
The target image is the object that the user wants to browse and view. The embodiments of the present invention further introduce the concept of a reference image: a reference image is any image on which a specific area has been marked, and a related image of the target image is a reference image that has a certain correlation with the target image; the reference image and its marking information may be stored locally or remotely. In the method of the embodiment of the present invention, whether a reference image is a related image of the target image may be determined from the relationship between the metadata of the target image and the metadata of the reference images, or from the relationship between the social relationship information of the user who wants to acquire the target image and the social relationship information associated with the reference image; of course, the metadata and the social relationship information of the user may also be combined for the determination. The social relationship information of the user is obtained according to the personal information of the user and may come from the local device, from the network, and the like.
In the method of the embodiment of the present invention, the specific area may be an area of salient features of the target image (the area that best represents the image content) or an area of special meaning to the user (the area most likely to arouse the user's interest). For example, for a picture whose subject is a person, the specific area may be a human face; for a landscape picture, the specific area may be a building or the like. Preferably, in the method according to the embodiment of the present invention, the specific area is the user's region of interest on the target image.
In the method provided by the embodiment of the present invention, the related image of the target image is searched for according to the metadata of the target image and/or the social relationship information of the user, and the target image is marked accordingly. When an image is marked, the marked specific area is associated with the image characteristics and/or user characteristics and then stored locally and/or remotely.
In summary, the method of the embodiment of the invention can mark the specific area on the target image according to the image feature and/or the user feature, so as to realize more effective improvement of user experience on the basis of providing conditions for rapid presentation of the image.
Specifically, in step S103, whether a related image of the target image exists may be determined in the following three ways:
First, by comparing the metadata of the target image and a reference image, and judging whether the reference image is a related image of the target image according to the degree of correlation of their metadata. The degree of correlation refers to the degree of similarity between the metadata of the target image and the metadata of the reference image; when the degree of correlation exceeds a certain threshold (e.g., 70%, 80%, 90%, or even 100%), the reference image is determined to be a related image of the target image, which includes the case where the target image and the reference image are actually the same image.
In the embodiments of the present invention, metadata refers to data that describes the characteristics of other data. For example, the metadata may be the brand and model of the camera that took the picture, the author of the picture, the shooting time, the resolution, the picture size, the aperture, the shutter speed, the exposure mode, the sensitivity, the focal length, the focus mode, or the metering mode. The metadata may also include environmental characteristics, i.e. contextual information, captured along with the data it describes. Such data can be obtained from relevant sensors or information sources, for example the ambient temperature, air quality, time and location at the time of shooting; the names, professions and social relationships of the people in the picture, the names of buildings, their construction units and current occupants, and the names and authors of works of art can be obtained through image recognition and network data retrieval. For example, if a GPS accessory is used, the metadata also includes position information such as the longitude and latitude of the shooting location, the shooting direction, or the altitude; if the picture has been processed with post-processing software, the name of the software is also included.
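As an illustration only, the following sketch shows one way such a metadata correlation degree could be computed and thresholded. The field names, the matching rule (exact equality over shared fields), and the 0.8 threshold are assumptions for the example, not values prescribed by the patent.

```python
# Minimal sketch of the first way: judge relatedness by metadata correlation degree.
def metadata_correlation(target_meta: dict, reference_meta: dict) -> float:
    """Fraction of shared metadata fields whose values match exactly."""
    shared = set(target_meta) & set(reference_meta)
    if not shared:
        return 0.0
    matches = sum(1 for key in shared if target_meta[key] == reference_meta[key])
    return matches / len(shared)

def is_related(target_meta, reference_meta, threshold=0.8):
    """The reference image counts as a related image when the correlation reaches the threshold."""
    return metadata_correlation(target_meta, reference_meta) >= threshold

target = {"camera": "X100", "author": "A", "time": "2013-06-18 10:00:01",
          "resolution": "4000x3000", "aperture": "f/2.8"}
reference = {"camera": "X100", "author": "A", "time": "2013-06-18 10:00:02",
             "resolution": "4000x3000", "aperture": "f/2.8"}
print(is_related(target, reference))   # True: 4 of 5 shared fields match (0.8)
```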
Second, by comparing the social relationship information of the user who wants to acquire the target image with the social relationship information of the user associated with the reference image when the reference image was marked, and judging whether the reference image is a related image of the target image. For example, when the social relationship information of the user who wants to acquire the target image has a certain correlation with the social relationship of the user who marked the reference image (i.e. the user who marked the reference image is a person who has a certain relationship with the user who wants to acquire the target image), the reference image is determined to be a related image of the target image.
The social relationship information of the user is acquired locally and/or remotely according to the personal information of the user, and is information about the group of people who have some relevance to the user. The personal information may include: the user's name, sex, age, occupation, nationality, ethnicity, biometric information (fingerprint, vein, palm print, retina, iris, body odor, facial shape, and even blood vessels, DNA, bones, etc.), and the like. The social relationship information may include: the user's family information, friend information, colleague information, historical behavior association information (for example, the user mentions others in posts shared on the network), friend circles in social networks (for example, QQ friends, WeChat friends, etc.), and the like.
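Purely for illustration, here is a minimal sketch of the second way, with hypothetical user identifiers and relation groups: the reference image is treated as related when the user who marked it appears somewhere in the browsing user's social relationship information.

```python
# Minimal sketch: does the user who marked the reference image appear in any relation group?
def social_relation_match(browsing_user_relations: dict, marking_user_id: str) -> bool:
    """Return True if the marking user shows up in any of the relation groups."""
    return any(marking_user_id in members
               for members in browsing_user_relations.values())

relations = {
    "family": ["user_17"],
    "friends": ["user_02", "user_31"],
    "colleagues": ["user_44"],
    "address_book": ["user_02", "user_90"],
}
print(social_relation_match(relations, "user_31"))  # True: the reference image was marked by a friend
print(social_relation_match(relations, "user_99"))  # False: no known relation, so not a related image
```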
Third, by combining the two: comparing the metadata of the target image and the reference image, and also comparing the social relationship information of the user who wants to acquire the target image with that of the user associated with the reference image when it was marked.
Based on the above determination, in step S103, if there is no related image of the target image, the specific area is set and marked. The specific area may be set according to a certain rule, for example according to position in the image (for example, the central area of the image is taken as the specific area), according to the color histogram of the picture (for example, a pixel region whose color proportion in the image exceeds a set threshold is taken as the specific area), or according to the user's preference; a sketch of such rules is given after the steps below. Specifically, when the specific area is the user's region of interest on the target image, the specific area is set according to the following steps:
displaying the target image;
and acquiring the user's region of interest on the target image, and setting the region of interest as the specific area. It should be noted that any technical means known to those skilled in the art may be used to capture the user's region of interest on the target image; for example, the region of interest may be determined from detected movements of the user's head and eyes.
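As an illustration of the rule-based setting mentioned above, the following sketch computes a centred box and a dominant-colour mask. The fraction, bin count, and proportion threshold are arbitrary example values, not values specified by the patent.

```python
import numpy as np

def center_region(shape, fraction=0.5):
    """Rule 1: take a centred box covering `fraction` of each dimension as the specific area."""
    h, w = shape[:2]
    dh, dw = int(h * fraction / 2), int(w * fraction / 2)
    return (h // 2 - dh, w // 2 - dw, h // 2 + dh, w // 2 + dw)   # (top, left, bottom, right)

def dominant_color_mask(gray_image, bins=16, proportion=0.3):
    """Rule 2: mark pixels in any histogram bin whose share of the image exceeds `proportion`."""
    hist, edges = np.histogram(gray_image, bins=bins, range=(0, 256))
    share = hist / gray_image.size
    mask = np.zeros(gray_image.shape, dtype=bool)
    for i, s in enumerate(share):
        if s > proportion:
            mask |= (gray_image >= edges[i]) & (gray_image < edges[i + 1])
    return mask

img = np.zeros((100, 100), dtype=np.uint8)
img[25:75, 25:75] = 200                      # a bright block occupying a quarter of the picture
print(center_region(img.shape))              # (25, 25, 75, 75)
print(int(dominant_color_mask(img).sum()))   # 7500: the dark background bin dominates this toy picture
```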
In step S103, if there is a related image of the target image, a region on the target image corresponding to a specific region on the related image is marked.
In summary, the method of the embodiment of the present invention finds the related image according to the image features and/or the user features, and marks the corresponding region of the target image by referring to the specific region marked on the related image, so that the region the user is most interested in can be presented first, accurately and quickly, in the subsequent presentation, thereby improving the user experience more effectively.
That is, when there is a related image of a target image, after marking a region on the target image corresponding to a specific region on the related image, the method of the embodiment of the present invention further includes the steps of:
preferentially displaying a specific region of the target image.
After the specific area is displayed, the area of the target image other than the specific area can continue to be displayed according to the user's needs, so that the target image is presented completely and clearly.
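Below is a minimal sketch, under assumed data shapes, of this priority presentation: the marked specific region is populated first and the rest of the picture afterwards. The region coordinates and image size are made up for the example.

```python
import numpy as np

def present_with_priority(image, region):
    """Yield two display buffers: the marked specific region first, then the whole image."""
    top, left, bottom, right = region
    buffer = np.zeros_like(image)
    buffer[top:bottom, left:right] = image[top:bottom, left:right]
    yield buffer            # step 1: only the specific region is populated
    yield image.copy()      # step 2: the remaining area is filled in on demand

rng = np.random.default_rng(0)
picture = rng.integers(1, 256, size=(120, 160), dtype=np.uint8)   # values 1..255, so 0 means "not yet shown"
marked = (30, 40, 90, 120)   # region copied from the mark on the related image
for step, frame in enumerate(present_with_priority(picture, marked), start=1):
    shown = np.count_nonzero(frame) / frame.size
    print(f"step {step}: {shown:.2f} of the pixels are populated")
```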
As shown in fig. 2, an image processing apparatus 200 provided for an embodiment of the present invention includes:
the receiving module 201 is configured to receive an acquisition instruction of a target image from a user.
Acquisition of the target image by the user generally refers to the user browsing and viewing the target image. The acquisition instruction is the operation by which the user indicates the intention to browse the target image, and includes instructions in input forms such as voice or touch, for example clicking a thumbnail of a picture or a file icon.
And the acquisition module 202 is used for acquiring the metadata of the target image and/or the personal information of the user.
The marking module 203 is configured to determine whether an image related to the target image exists according to metadata of the target image and/or personal information of a user, and mark a specific area on the target image according to a determination result.
The target image is the object that the user wants to browse and view. The embodiments of the present invention further introduce the concept of a reference image: a reference image is any image on which a specific area has been marked, and a related image of the target image is a reference image that has a certain correlation with the target image; the reference image and its marking information may be stored locally on the device or remotely. In the device according to the embodiment of the present invention, whether a reference image is a related image of the target image may be determined from the relationship between the metadata of the target image and the metadata of the reference images, or from the relationship between the social relationship information of the user who wants to acquire the target image and the social relationship information associated with the reference image; of course, the metadata and the social relationship information of the user may also be combined for the determination. The social relationship information is obtained by the acquisition module 202 according to the personal information of the user, and may come from the local device, from the network, and the like.
In the device of the embodiment of the present invention, the specific area may be an area of salient features of the target image (the area that best represents the image content) or an area of special meaning to the user (the area most likely to arouse the user's interest). For example, for a picture whose subject is a person, the specific area may be a human face; for a landscape picture, the specific area may be a building or the like. Preferably, in the device according to the embodiment of the present invention, the specific area is the user's region of interest on the target image.
In the device provided by the embodiment of the invention, the related image of the target image is searched and marked according to the metadata of the target image and/or the social relationship information of the user. When marking an image, the marked specific area is associated with image characteristics and/or user characteristics and then stored locally and/or remotely.
In summary, the device according to the embodiment of the present invention can mark the specific area on the target image according to the image feature and/or the user feature, so as to achieve more effective improvement of the user experience on the basis of providing conditions for rapid presentation of the image.
Specifically, the marking module 203 determines whether a related image of the target image exists in the following three ways:
First, by comparing the metadata of the target image and a reference image, and judging whether the reference image is a related image of the target image according to the degree of correlation of their metadata. The degree of correlation refers to the degree of similarity between the metadata of the target image and the metadata of the reference image; when the degree of correlation exceeds a certain threshold (e.g., 70%, 80%, 90%, or even 100%), the reference image is determined to be a related image of the target image, which includes the case where the target image and the reference image are actually the same image.
Second, by comparing the social relationship information of the user who wants to acquire the target image with the social relationship information of the user associated with the reference image when the reference image was marked, and judging whether the reference image is a related image of the target image. For example, when the social relationship information of the user who wants to acquire the target image has a certain correlation with the social relationship of the user who marked the reference image (i.e. the user who marked the reference image is a person who has a certain relationship with the user who wants to acquire the target image), the reference image is determined to be a related image of the target image.
The social relationship information of the user is acquired locally and/or remotely according to the personal information of the user, and is information about the group of people who have some relevance to the user. The personal information may include: the user's name, sex, age, occupation, nationality, ethnicity, biometric information (fingerprint, vein, palm print, retina, iris, body odor, facial shape, and even blood vessels, DNA, bones, etc.), and the like. The social relationship information may include: the user's family information, friend information, colleague information, historical behavior association information (for example, the user mentions others in posts shared on the network), friend circles in social networks (for example, QQ friends, WeChat friends, etc.), and the like.
Third, by combining the two: comparing the metadata of the target image and the reference image, and also comparing the social relationship information of the user who wants to acquire the target image with that of the user associated with the reference image when it was marked.
Based on the above determination, the marking module 203 sets and marks the specific area when there is no related image of the target image. The specific area may be set according to a certain rule, for example according to position in the image (for example, the central area of the image is taken as the specific area), according to the color histogram of the picture (for example, a pixel region whose color proportion in the image exceeds a set threshold is taken as the specific area), or according to the user's preference.
Specifically, when the specific region is a region of interest of the user on the target image, the apparatus 200 of the embodiment of the present invention further includes:
a display module 204, configured to display the target image;
the acquisition module 202 is further configured to acquire the user's region of interest on the target image and set the region of interest as the specific area. It should be noted that the acquisition module 202 may use any technical means known to those skilled in the art to capture the user's region of interest on the target image; for example, the region of interest may be determined from detected movements of the user's head and eyes.
The marking module 203 marks a region on the target image corresponding to a specific region on the related image when the related image of the target image exists.
In summary, the device of the embodiment of the present invention finds the related image according to the image features and/or the user features, and marks the corresponding region of the target image by referring to the specific region marked on the related image, so that the region the user is most interested in can be presented first, accurately and quickly, in the subsequent presentation, thereby improving the user experience more effectively.
That is, when a related image of the target image exists, after the region on the target image corresponding to the specific region on the related image has been marked, the display module 204 of the device according to the embodiment of the present invention is further configured to preferentially display the specific region of the target image; furthermore, after the specific region has been displayed, the area of the target image other than the specific region can continue to be displayed according to the user's needs, so that the target image is presented completely and clearly.
The methods and apparatus of embodiments of the present invention are further illustrated by the following specific examples.
When a user browses a target picture on the device of the embodiment of the invention or on a terminal device comprising the device of the embodiment of the invention:
In this example, it is assumed that a reference picture that is a related image of the target picture is stored in the cloud server, the reference picture and the target picture are continuously shot pictures, and it can be determined from the metadata of the two pictures that their degree of correlation is high: most of their shooting parameters are the same, and their shooting times are consistent with continuously shot pictures. As shown in fig. 3, the image processing according to the method of the embodiment of the present invention is as follows (a compact code sketch of this flow is given after the steps):
S301, receiving an acquisition instruction input by the user by clicking the target picture.
S302, collecting metadata of the target picture and personal information of the user.
S303, judging whether each reference picture is a related image of the target picture according to the similarity between the metadata of the target picture and the metadata of each reference picture.
S304, if no related image of the target picture exists, executing step S305; otherwise, executing step S308.
S305, displaying the target picture.
S306, collecting the user's region of interest on the target picture.
S307, setting the region of interest as the specific region and marking it, storing the marking information in association with the metadata of the target image and the personal information of the user, storing it locally and/or transmitting it to the cloud server for storage, and ending the processing flow.
S308, marking the area on the target picture corresponding to the specific area marked on the reference picture serving as the related image.
S309, preferentially displaying the specific area, then displaying the other areas, and ending the processing flow.
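A compact, self-contained sketch of this Fig. 3 flow follows. The helper functions, data shapes, and the 0.6 threshold are assumptions for illustration; they are not the patent's concrete implementation.

```python
def correlation(meta_a, meta_b):
    """Share of common metadata fields whose values match (see the first way above)."""
    shared = set(meta_a) & set(meta_b)
    return sum(meta_a[k] == meta_b[k] for k in shared) / len(shared) if shared else 0.0

def capture_region_of_interest():
    """Stand-in for S306: in practice derived from detected head and eye movement."""
    return (20, 20, 80, 80)

def process_target_picture(target_meta, user_info, reference_store, threshold=0.6):
    # S303/S304: look for a reference picture whose metadata correlation is high enough.
    related = next((ref for ref in reference_store
                    if correlation(target_meta, ref["metadata"]) >= threshold), None)
    if related is None:
        print("S305: display the whole target picture")
        roi = capture_region_of_interest()                                 # S306
        mark = {"region": roi, "metadata": target_meta, "user": user_info}
        reference_store.append({"metadata": target_meta, "mark": mark})   # S307: store locally/cloud
        return mark
    region = related["mark"]["region"]                                     # S308: reuse the related mark
    print(f"S309: display region {region} first, then the remaining area")
    return {"region": region, "metadata": target_meta, "user": user_info}

store = [{"metadata": {"camera": "X100", "author": "A", "time": "10:00:01"},
          "mark": {"region": (10, 10, 60, 60)}}]
target = {"camera": "X100", "author": "A", "time": "10:00:02"}   # a continuously shot picture
print(process_target_picture(target, {"name": "user"}, store))
```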
In this example, it is assumed that the cloud server stores a reference picture that is a related image of the target picture, and it can be determined from the social relationship information of the user that the user associated with the reference picture is a person having a certain specific relationship with the user who wants to browse the target picture, their occupations, historical behaviors, and the like being highly related. As shown in fig. 4, the image processing according to the method of the embodiment of the present invention proceeds as follows:
S401, receiving an instruction, input by the user through voice, for browsing the target picture.
S402, collecting metadata of the target picture and personal information of the user.
S403, acquiring social relationship information of the user according to the personal information of the user.
S404, judging whether each reference picture is a related image of the target picture according to the social relationship information of the user.
S405, if no related image of the target picture exists, executing step S406; otherwise, executing step S409.
S406, displaying the target picture.
S407, collecting the user's region of interest on the target picture.
S408, setting the region of interest as the specific region and marking it, storing the marking information in association with the metadata of the target image and the personal information of the user, storing it locally and/or transmitting it to the cloud server for storage, and ending the processing flow.
S409, marking the area on the target picture corresponding to the specific area marked on the reference picture serving as the related image.
S410, preferentially displaying the specific area, then displaying the other areas, and ending the processing flow.
Referring to fig. 5, the present invention further provides an image processing apparatus 500, and the specific embodiment of the present invention does not limit the specific implementation of the image processing apparatus 500. As shown in fig. 5, the apparatus may include:
a processor (processor)510, a Communications Interface 520, a memory 530, and a communication bus 540. Wherein:
processor 510, communication interface 520, and memory 530 communicate with one another via a communication bus 540.
A communication interface 520 for communicating with network elements such as clients and the like.
The processor 510 is configured to execute the program 532, and may specifically perform the relevant steps in the method embodiment shown in fig. 1.
In particular, the program 532 may include program code, and the program code includes computer operating instructions.
The processor 510 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
A memory 530 for storing a program 532. Memory 530 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory. The program 532 may specifically include:
the receiving module is used for receiving an acquisition instruction of a user for a target image;
the acquisition module is used for acquiring metadata of the target image and/or personal information of the user;
and the marking module is used for judging whether the related image of the target image exists or not and marking the specific area on the target image according to the judgment result.
The specific implementation of each unit in the program 532 can refer to a corresponding unit in the embodiment shown in fig. 2, which is not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
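The embodiments above do not give concrete code; as an illustration only, the following sketch shows one possible way to organise the three modules of program 532 described above. The class names, method signatures, and data shapes are assumptions, not the patent's API.

```python
class ReceivingModule:
    def receive(self, instruction):
        # e.g. a click on a thumbnail or a voice command naming the target image
        return instruction["target_image_id"]

class AcquisitionModule:
    def collect(self, image_id, user):
        # gather metadata of the target image and/or the user's personal information
        return {"image_id": image_id, "metadata": {}, "user": user}

class MarkingModule:
    def mark(self, collected, reference_store):
        # decide whether a related image exists and return the specific region to mark
        for ref in reference_store:
            if ref["metadata"] == collected["metadata"]:
                return ref["region"]
        return None   # no related image: the caller then sets a new specific region

receiver, collector, marker = ReceivingModule(), AcquisitionModule(), MarkingModule()
image_id = receiver.receive({"target_image_id": "IMG_0042"})
data = collector.collect(image_id, {"name": "user"})
print(marker.mark(data, reference_store=[{"metadata": {}, "region": (0, 0, 32, 32)}]))
```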
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
The above embodiments are only for illustrating the invention and are not to be construed as limiting the invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention, therefore, all equivalent technical solutions also belong to the scope of the invention, and the scope of the invention is defined by the claims.

Claims (22)

1. An image processing method, characterized in that the method comprises the steps of:
receiving an acquisition instruction of a user for a target image;
collecting metadata of the target image and/or personal information of the user;
judging whether a related image of the target image exists or not according to the metadata of the target image and/or the personal information of the user, and marking a specific area on the target image according to a judgment result;
when the judgment result is that the related image of the target image exists, marking a region corresponding to a specific region on the related image on the target image;
preferentially displaying a specific region of the target image.
2. The method according to claim 1, wherein whether the reference image is a related image of the target image is determined according to a degree of correlation of metadata of the target image and the reference image.
3. The method according to claim 1, characterized in that the method further comprises the step of:
and acquiring the social relationship information of the user according to the personal information of the user.
4. The method of claim 3, wherein whether the reference image is a related image of the target image is determined according to the social relationship information of the user.
5. The method according to claim 3, wherein whether the reference image is a related image of the target image is determined according to the social relationship information of the user and the correlation degree of the metadata of the target image and the reference image.
6. The method according to claim 1, wherein in the step of determining whether there is a related image of the target image and marking a specific area on the target image according to the determination result:
and setting and marking a specific area when the related image of the target image does not exist.
7. The method of claim 6, wherein the specific region is a region of interest of the user on the target image.
8. The method of claim 7, wherein the specific area is set according to the steps of:
displaying the target image;
and acquiring an interested area of the target image of the user, and setting the interested area as a specific area.
9. The method according to claim 1, characterized in that the method further comprises the step of:
and displaying the area of the target image except the specific area.
10. The method of any one of claims 1-9, wherein the personal information comprises: the name, sex, age, occupation, nationality and/or biological characteristics of the user.
11. The method according to any one of claims 3-5, wherein the social relationship information comprises: one or more of family information, friend information, colleague information, historical behavior association information, address book information, and social application information of the user.
12. An image processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving an acquisition instruction of a user for a target image;
the acquisition module is used for acquiring metadata of the target image and/or personal information of the user;
the marking module is used for judging whether a related image of the target image exists or not according to the metadata of the target image and/or the personal information of the user and marking a specific area on the target image according to a judgment result;
the marking module marks a region on the target image corresponding to a specific region on the related image when the related image of the target image exists;
and the display module is used for preferentially displaying the specific area of the target image.
13. The apparatus of claim 12, wherein the marking module determines whether the reference image is a related image of the target image according to a degree of correlation of the metadata of the target image and the reference image.
14. The device of claim 12, wherein the collecting module is further configured to obtain social relationship information of the user according to personal information of the user.
15. The device of claim 14, wherein the tagging module determines whether a reference image is a related image to the target image according to social relationship information of the user.
16. The device of claim 14, wherein the tagging module determines whether the reference image is a related image of the target image according to social relationship information of the user and a correlation degree of metadata of the target image and the reference image.
17. The apparatus of claim 12, wherein the marking module sets and marks a specific area when there is no related image of the target image.
18. The apparatus of claim 17, wherein the specific region is a region of interest of the user on the target image.
19. The apparatus of claim 18,
the display module is used for displaying the target image;
the acquisition module is further used for acquiring an interested area of the target image of the user and setting the interested area as a specific area.
20. The apparatus of claim 12, wherein the display module is further configured to display an area of the target image other than the specific area.
21. The apparatus according to any of claims 12-20, wherein the personal information comprises: the name, sex, age, occupation, nationality and/or biological characteristics of the user.
22. The device according to any one of claims 14-16, wherein the social relationship information comprises: one or more of family information, friend information, colleague information, historical behavior association information, address book information, and social application information of the user.
CN201310240246.1A 2013-06-18 2013-06-18 Image processing method and apparatus Active CN103353879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310240246.1A CN103353879B (en) 2013-06-18 2013-06-18 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310240246.1A CN103353879B (en) 2013-06-18 2013-06-18 Image processing method and apparatus

Publications (2)

Publication Number Publication Date
CN103353879A CN103353879A (en) 2013-10-16
CN103353879B true CN103353879B (en) 2020-06-02

Family

ID=49310252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310240246.1A Active CN103353879B (en) 2013-06-18 2013-06-18 Image processing method and apparatus

Country Status (1)

Country Link
CN (1) CN103353879B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463776A (en) * 2014-10-30 2015-03-25 深圳市金立通信设备有限公司 Image display method
CN104360803A (en) * 2014-10-30 2015-02-18 深圳市金立通信设备有限公司 Terminal
CN105989092A (en) * 2015-02-12 2016-10-05 东芝医疗系统株式会社 Medical image processing equipment, medical image processing method and medical imaging system
CN105611341B (en) * 2015-12-21 2019-02-22 小米科技有限责任公司 A kind of method, apparatus and system for transmitting image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102186067A (en) * 2011-03-31 2011-09-14 深圳超多维光电子有限公司 Image frame transmission method, device, display method and system
CN102202173A (en) * 2010-03-23 2011-09-28 三星电子(中国)研发中心 Photo automatically naming method and device thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112008003959T5 (en) * 2008-07-31 2011-06-01 Hewlett-Packard Development Co., L.P., Houston Perceptual segmentation of images
JP4853510B2 (en) * 2008-11-27 2012-01-11 ソニー株式会社 Information processing apparatus, display control method, and program
US20130129142A1 (en) * 2011-11-17 2013-05-23 Microsoft Corporation Automatic tag generation based on image content
CN103139386A (en) * 2013-02-05 2013-06-05 广东欧珀移动通信有限公司 Photo album sequencing displaying method and mobile phone with function of photo album sequencing displaying

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102202173A (en) * 2010-03-23 2011-09-28 三星电子(中国)研发中心 Photo automatically naming method and device thereof
CN102186067A (en) * 2011-03-31 2011-09-14 深圳超多维光电子有限公司 Image frame transmission method, device, display method and system

Also Published As

Publication number Publication date
CN103353879A (en) 2013-10-16

Similar Documents

Publication Publication Date Title
US11973732B2 (en) Messaging system with avatar generation
US11483268B2 (en) Content navigation with automated curation
US9996735B2 (en) Facial recognition
EP3063731B1 (en) Image cache for replacing portions of images
EP3179408B1 (en) Picture processing method and apparatus, computer program and recording medium
US10110868B2 (en) Image processing to determine center of balance in a digital image
CN110084153B (en) Smart camera for automatically sharing pictures
CN108600632B (en) Photographing prompting method, intelligent glasses and computer readable storage medium
US20130243273A1 (en) Image publishing device, image publishing method, image publishing system, and program
TWI586160B (en) Real time object scanning using a mobile phone and cloud-based visual search engine
CN115735229A (en) Updating avatar garments in messaging systems
CN103988202A (en) Image attractiveness based indexing and searching
CN103353879B (en) Image processing method and apparatus
US20220207875A1 (en) Machine learning-based selection of a representative video frame within a messaging application
WO2019171803A1 (en) Image search device, image search method, electronic equipment, and control method
US9202131B2 (en) Information processing apparatus, information processing method, computer program, and image display apparatus
CN113906437A (en) Improved face quality of captured images
US11477397B2 (en) Media content discard notification system
US9942472B2 (en) Method and system for real-time image subjective social contentment maximization
CN112188108A (en) Photographing method, terminal, and computer-readable storage medium
CN111352680A (en) Information recommendation method and device
CN114387157A (en) Image processing method and device and computer readable storage medium
JP5932107B2 (en) Image processing server and imaging apparatus
WO2014100448A1 (en) Collecting and selecting photos
CN116349220A (en) Real-time video editing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant