CN110245250A - Image processing method and relevant apparatus - Google Patents

Image processing method and relevant apparatus

Info

Publication number
CN110245250A
Authority
CN
China
Prior art keywords
user
target image
image
target
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910503401.1A
Other languages
Chinese (zh)
Inventor
韩世广
方攀
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910503401.1A priority Critical patent/CN110245250A/en
Publication of CN110245250A publication Critical patent/CN110245250A/en
Pending legal-status Critical Current

Classifications

    • G06F 3/013: Eye tracking input arrangements (input arrangements for interaction between user and computer)
    • G06F 16/54: Browsing; visualisation therefor (retrieval of still image data)
    • G06F 16/583: Retrieval characterised by using metadata automatically derived from the content (still image data)
    • G06V 10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 40/19: Sensors for eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; feature extraction (eye characteristics)
    • G06V 40/197: Matching; classification (eye characteristics)
    • G06V 10/62: Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; pattern tracking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose an image processing method and a related apparatus, applied to an electronic device that includes an eye tracking apparatus. The method comprises: when it is detected that a user is browsing a target image in a gallery, acquiring first eye tracking information of the user; determining the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, determining the user's local interest levels in a plurality of feature images into which the target image is partitioned, along with the target feature image with the highest local interest level; and searching the gallery for the images containing the target feature image to obtain a target image set and pushing the target image set to the user. By tracking the user's eyes, the embodiments of the present application help improve the efficiency of retrieving and browsing images.

Description

Image processing method and related device
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to an image processing method and a related apparatus.
Background
With the widespread adoption of mobile terminals such as smartphones, these devices support ever more applications and increasingly powerful functions, developing in diversified and personalized directions and becoming indispensable electronic products in users' daily lives. In the prior art, retrieving and browsing images in a gallery is inefficient: images are generally retrieved by previewing thumbnails, or displayed according to fixed, pre-existing classification criteria such as portrait images versus landscape images. The retrieval mode is thus limited, and users cannot retrieve and browse images in a customized way.
Disclosure of Invention
Embodiments of the present application provide an image processing method and a related apparatus, which help improve the efficiency with which a user retrieves and browses images.
In a first aspect, an embodiment of the present application provides an electronic device, including an eye tracking apparatus, a memory, and a processor, wherein,
the eye tracking apparatus is configured to acquire first eye tracking information and second eye tracking information of a user;
the memory is configured to store the first eye tracking information and the second eye tracking information;
the processor is configured to acquire the first eye tracking information of the user when it is detected that the user is browsing a target image in a gallery; to determine the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, to determine the user's local interest levels in a plurality of feature images into which the target image is partitioned and the target feature image with the highest local interest level; and to search the gallery for the images containing the target feature image, obtain a target image set, and push the target image set to the user.
In a second aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device, and the method includes:
when detecting that a user is browsing a target image in a gallery, acquiring first eye tracking information of the user;
determining the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, determining the user's local interest levels in a plurality of feature images into which the target image is partitioned and the target feature image with the highest local interest level;
and searching the gallery for the images containing the target feature image to obtain a target image set and pushing the target image set to the user.
In a third aspect, an embodiment of the present application provides an image processing apparatus applied to an electronic device, the image processing apparatus including a processing unit and a communication unit, wherein,
the processing unit is configured to acquire, through the communication unit, the first eye tracking information of the user when it is detected that the user is browsing a target image in the gallery; to determine the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, to determine the user's local interest levels in a plurality of feature images into which the target image is partitioned and the target feature image with the highest local interest level; and to search the gallery for the images containing the target feature image, obtain a target image set, and push the target image set to the user.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a controller, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the controller, and the programs include instructions for performing the steps of any of the methods of the second aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps described in any of the methods of the second aspect of the present application.
In a sixth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in any of the methods of the second aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the present application, when it is detected that a user is browsing a target image in a gallery, the electronic device first acquires first eye tracking information of the user; it then determines the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, determines the user's local interest levels in a plurality of feature images into which the target image is partitioned, along with the target feature image with the highest local interest level; finally, it searches the gallery for the images containing the target feature image to obtain a target image set and pushes the target image set to the user. While the user browses the target image, the electronic device analyzes the user's overall interest in it from the acquired first eye tracking information and, when that overall interest is high, further identifies the target feature image with the highest local interest within the target image. A target image set can then be assembled from the gallery images containing that target feature image, so the user can retrieve such images simply by fixating the target feature with the eyes, which improves the efficiency of image retrieval and browsing.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
FIG. 4 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 6 is a block diagram of functional units of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Electronic devices may include various handheld devices, vehicle-mounted devices, wearable devices (e.g., smartwatches, smartbands, pedometers, etc.), computing devices or other processing devices connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal Equipment (terminal device), and so forth, having wireless communication capabilities. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 includes a housing 110, a circuit board 120 disposed in the housing 110, and an eye tracking apparatus 130 disposed on the housing 110. A processor 121 and a memory 122 are disposed on the circuit board 120, the memory 122 is connected to the processor 121, and the processor 121 is connected to the touch display screen and to the eye tracking apparatus 130; wherein,
the eye tracking apparatus 130 is configured to acquire first eye tracking information and second eye tracking information of the user;
the memory 122 is configured to store the first eye tracking information and the second eye tracking information;
the processor 121 is configured to acquire the first eye tracking information of the user when it is detected that the user is browsing a target image in a gallery; to determine the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, to determine the user's local interest levels in a plurality of feature images into which the target image is partitioned and the target feature image with the highest local interest level; and to search the gallery for the images containing the target feature image, obtain a target image set, and push the target image set to the user.
The eye tracking apparatus can acquire feature information related to eye changes, for example by extracting change features through image capture or scanning. By tracking eye changes in real time, it can predict the state and needs of the user and respond accordingly, achieving the goal of controlling the device with the eyes. The eye tracking apparatus mainly comprises an infrared device (such as an infrared sensor) and an image acquisition device (such as a camera). When the user wants to use the eye tracking function of the electronic device, the function must first be enabled, putting the eye tracking apparatus into an available state. After the function is enabled, the user may first be guided through a calibration procedure: the geometric and motion features of the user's eyeballs are collected, the position of the user's gaze point on the screen is calculated, and it is then determined whether that gaze point matches the position the user was guided to fixate, thereby completing calibration.
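The calibration step described above can be sketched as a simple tolerance check, assuming the guided fixation targets and the measured gaze points are screen coordinates in pixels; the 30-pixel tolerance is an illustrative assumption, not a value from the patent.

```python
import math

def calibration_ok(guided_points, measured_points, tolerance_px=30.0):
    """Return True when every measured gaze point lies within
    `tolerance_px` pixels of the corresponding guided fixation target.
    Both arguments are sequences of (x, y) screen coordinates."""
    return all(math.dist(g, m) <= tolerance_px
               for g, m in zip(guided_points, measured_points))
```

If any measured point drifts too far from its target, calibration would be repeated before the gaze-point positions are trusted.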
It can be seen that, in the embodiments of the present application, when it is detected that a user is browsing a target image in a gallery, the electronic device first acquires first eye tracking information of the user; it then determines the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, determines the user's local interest levels in a plurality of feature images into which the target image is partitioned, along with the target feature image with the highest local interest level; finally, it searches the gallery for the images containing the target feature image to obtain a target image set and pushes the target image set to the user. While the user browses the target image, the electronic device analyzes the user's overall interest in it from the acquired first eye tracking information and, when that overall interest is high, further identifies the target feature image with the highest local interest within the target image. A target image set can then be assembled from the gallery images containing that target feature image, so the user can retrieve such images simply by fixating the target feature with the eyes, which improves the efficiency of image retrieval and browsing.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method applied to an electronic device according to an embodiment of the present disclosure. As shown in the figure, the image processing method includes:
s201, when detecting that a user is browsing a target image in a gallery, the electronic device acquires first eye tracking information of the user.
The gallery comprises a plurality of images of different types, captured at different times and stored on the electronic device; the target image may be any image the user views while browsing. The electronic device may acquire the first eye tracking information of the user through the eye tracking apparatus. The first eye tracking information is collected while the user browses the target image and may include a face image, an eye image, an eyeball image, and the like of the user.
S202, the electronic device determines the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, determines the user's local interest levels in a plurality of feature images into which the target image is partitioned and the target feature image with the highest local interest level.
First, the overall interest level of the user in the target image is determined according to the user's first eye tracking information. Then, when the overall interest level is detected to be greater than the preset interest level, the user's local interest level in each of the plurality of feature images into which the target image is partitioned is determined, along with the target feature image with the highest local interest level. For example, when the target image includes a face image of the user, a background image, an image of a hat worn by the user, and other face images, the user's local interest in each of these four feature images can be determined, and the target feature image with the highest local interest may be found to be the hat image.
S203, the electronic device searches the gallery for the images containing the target feature image to obtain a target image set and pushes the target image set to the user.
When the target feature image with the highest local interest in the target image has been determined, the electronic device can use it as a retrieval condition to search the gallery for the images that contain it, forming a target image set that is pushed to the user.
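The retrieval step S203 can be sketched as follows. Representing each gallery image by a set of feature labels is purely an illustrative assumption; the patent does not specify how feature matching is implemented.

```python
def build_target_image_set(gallery, target_feature):
    """Search the gallery for every image containing the target feature,
    mirroring step S203.

    gallery: dict mapping image name -> set of feature labels.
    Returns the set of names of matching images (the target image set).
    """
    return {name for name, features in gallery.items()
            if target_feature in features}
```

For a gallery `{"a.jpg": {"hat", "person"}, "b.jpg": {"scenery"}, "c.jpg": {"hat"}}` and target feature `"hat"`, the pushed target image set would contain `a.jpg` and `c.jpg`.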
It can be seen that, in the embodiments of the present application, when it is detected that a user is browsing a target image in a gallery, the electronic device first acquires first eye tracking information of the user; it then determines the overall interest level of the user in the target image according to the first eye tracking information and, when the overall interest level is detected to be greater than a preset interest level, determines the user's local interest levels in a plurality of feature images into which the target image is partitioned, along with the target feature image with the highest local interest level; finally, it searches the gallery for the images containing the target feature image to obtain a target image set and pushes the target image set to the user. While the user browses the target image, the electronic device analyzes the user's overall interest in it from the acquired first eye tracking information and, when that overall interest is high, further identifies the target feature image with the highest local interest within the target image. A target image set can then be assembled from the gallery images containing that target feature image, so the user can retrieve such images simply by fixating the target feature with the eyes, which improves the efficiency of image retrieval and browsing.
In one possible example, determining the overall interest level of the user in the target image according to the first eye tracking information includes: determining the gaze duration of the user on the target image according to the first eye tracking information; when it is detected that the gaze duration is greater than a preset duration, acquiring facial expression information of the user while browsing the target image; and determining the overall interest level of the user in the target image according to the facial expression information.
When the user browses the target image, the overall interest level of the user in the target image needs to be determined, and only when the overall interest level is detected to be greater than the preset interest level is the target feature image with the highest local interest determined. The gaze duration of the user on the target image can be determined from the first eye tracking information acquired while the user browses it. When the gaze duration is detected to be greater than the preset duration, it can be preliminarily determined that the user is interested in the target image. At this point, facial expression information of the user browsing the target image is further acquired, and the overall interest level is determined by analyzing it. A long gaze does not always indicate interest, so the facial expression is analyzed: a happy, focused, or thinking expression indicates that the user's interest in the target image is high, while a dull or confused expression indicates that it is low.
As can be seen, in this example, the gaze duration of the user on the target image is determined by analyzing the user's first eye tracking information; when the gaze duration is detected to be greater than the preset duration, it is preliminarily determined that the user is interested in the target image, and the user's facial expression information while browsing the target image is then used to further determine the overall interest level, which helps determine the overall interest level in the target image more accurately.
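This two-stage check can be sketched as follows, assuming a simple expression-to-score mapping; the particular scores and the two-second preset duration are illustrative assumptions, not values from the patent.

```python
# Illustrative mapping from a recognized facial expression to an
# interest score; the example expressions come from the text above.
EXPRESSION_SCORE = {
    "happy": 0.9, "focused": 0.8, "thinking": 0.7,  # high interest
    "confused": 0.3, "dull": 0.2,                   # low interest
}

def overall_interest(gaze_seconds, expression, preset_duration=2.0):
    """Return an overall interest score in [0, 1].

    Stage 1: a gaze shorter than the preset duration never counts as
    interest, whatever the expression. Stage 2: the facial expression
    refines the score, since a long gaze alone is not conclusive."""
    if gaze_seconds <= preset_duration:
        return 0.0
    # Unrecognized expressions fall back to a neutral score.
    return EXPRESSION_SCORE.get(expression, 0.5)
```

The resulting score would then be compared against the preset interest level to decide whether to proceed to the local-interest analysis.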
In one possible example, determining the local interest levels of the plurality of feature images into which the target image is partitioned and the target feature image with the highest local interest level includes: determining, according to the first eye tracking information, a plurality of fixation points of the user on the target image while browsing it; determining the feature image corresponding to each fixation point according to the display position of each of the plurality of fixation points; determining the number of fixation points corresponding to each of the plurality of feature images and determining the local interest level of each feature image according to that number; and determining the feature image with the largest number of fixation points among the plurality of feature images as the target feature image.
Human vision has the physiological characteristic that perception becomes gradually blurred from the central point toward the periphery, so fixation points arise as the user browses an image. A fixation point is defined as a position at which the user's eyes dwell for longer than a certain duration, for example longer than 60 ms. A plurality of fixation points of the user on the target image are determined, and the feature image corresponding to each fixation point is determined according to its display position. The number of fixation points corresponding to each of the plurality of feature images is thereby obtained, and the local interest level of each feature image is determined from that number. The target feature image is the feature image with the user's highest local interest, that is, the feature image with the largest number of corresponding fixation points.
For example, suppose the target image is a portrait containing a person, scenery, and a hat worn by the person. A plurality of fixation points of the user on the portrait are determined, for example 10 fixation points, of which 2 fall on the person image, 3 on the scenery image, and 5 on the hat image; this indicates that the user's local interest in the hat in the target image is the highest.
As can be seen, in this example, by obtaining a plurality of fixation points of the user while browsing the target image and determining the feature image in the target image corresponding to each fixation point, the target feature image with the user's highest local interest can be determined from the number of fixation points corresponding to each feature image. The retrieval condition is then set to this target feature image, which facilitates quickly finding the plurality of images in the gallery that contain it.
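The fixation-point counting described above can be sketched as follows; the region names, rectangle coordinates, and the containment test are illustrative assumptions, not part of the embodiment:

```python
# Hypothetical sketch of the fixation-point counting step: each gaze
# point is assigned to the feature region that contains it, and the
# region with the most gaze points becomes the target feature image.
# Region names and rectangles are made-up values for illustration.

def target_feature(gaze_points, regions):
    """gaze_points: list of (x, y); regions: dict name -> (x0, y0, x1, y1)."""
    counts = {name: 0 for name in regions}
    for x, y in gaze_points:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x < x1 and y0 <= y < y1:
                counts[name] += 1  # local interest = gaze-point count
                break
    return max(counts, key=counts.get), counts

# Mirror of the hat example: 2 points on the person, 3 on the
# landscape, 5 on the hat -> the hat is the target feature image.
regions = {
    "hat": (0, 0, 50, 40),
    "person": (0, 40, 50, 100),
    "landscape": (50, 0, 100, 100),
}
gaze = [(10, 50), (20, 60),                              # person
        (60, 10), (70, 20), (80, 30),                    # landscape
        (5, 5), (15, 10), (25, 15), (35, 20), (45, 30)]  # hat
best, counts = target_feature(gaze, regions)
```

With these made-up coordinates, `best` is `"hat"`, matching the example where 5 of the 10 fixation points land on the hat.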
In one possible example, after detecting that the user is browsing the target image in the gallery and before acquiring the first eye tracking information of the user, the method further includes: acquiring a blinking operation of the user for the target image being browsed; and when the blinking operation is detected to meet a preset condition, starting an eye tracking device and obtaining the first eye tracking information of the user through the eye tracking device, where the eye tracking device includes an infrared device and an image acquisition device.
The electronic device acquires the first eye tracking information of the user through the eye tracking device. While the user browses images, if a triggering operation for the eye tracking device is detected, the device is started to acquire eye tracking information. When the user browses the target image, the eye tracking device can be started through a blinking operation, and the first eye tracking information of the user is acquired accordingly.
When the blinking operation is detected to satisfy the preset condition, for example blinking twice, the eye tracking device may be activated to acquire the first eye tracking information of the user. The eye tracking device includes an infrared device and an image acquisition device, while the blinking operation itself can be captured with the camera alone.
In this example, while the user browses images, the electronic device detects the user's blinking operation and determines whether it satisfies the preset condition, and thereby decides whether to start the eye tracking device to acquire the user's eye tracking information, so that the target feature image in the target image can be located as the retrieval condition for retrieving the plurality of images in the gallery that contain it.
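The blink-triggered activation can be sketched as a small state machine; the two-blink condition matches the example above, but the one-second window and the timestamped interface are assumptions for illustration:

```python
# Illustrative sketch of blink-triggered activation: the tracker starts
# only after the preset condition (here, two blinks within one second)
# is met. The window length and class interface are assumed values.

class BlinkTrigger:
    def __init__(self, required_blinks=2, window_s=1.0):
        self.required = required_blinks
        self.window = window_s
        self.blink_times = []
        self.tracking_active = False

    def on_blink(self, t):
        # Keep only blinks inside the sliding time window.
        self.blink_times = [b for b in self.blink_times
                            if t - b <= self.window]
        self.blink_times.append(t)
        if len(self.blink_times) >= self.required:
            self.tracking_active = True  # a real device would start the tracker here
        return self.tracking_active
```

A single blink leaves the tracker off; a second blink within the window turns it on, while two blinks spaced farther apart do not.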
In one possible example, after obtaining the target image set and pushing it to the user, the method further includes: acquiring a push-confirmation operation input by the user; and displaying the plurality of images in the target image set after a page-turning operation for the target image is detected.
After the target feature image is used as the retrieval condition, the plurality of images in the gallery containing the target feature image are retrieved to form the target image set, which can be pushed to the user. Once the user's push-confirmation operation is obtained, the plurality of images contained in the target image set are displayed as soon as a page-turning operation for the target image is detected; that is, after the target image, the images in the target image set are displayed directly instead of the next image in the gallery. For example, when it is detected that the user is most interested in a hat in the target image, the plurality of images in the gallery containing the hat may be retrieved to form the target image set, so that after turning the page from the target image, the user can view all stored images in which the hat appears.
Therefore, in this example, after the target image set is generated, the user views the plurality of images related to the target feature image directly by performing a page-turning operation on the target image, which helps the user view the retrieval result quickly and improves the continuity of image browsing.
In one possible example, the displaying of the plurality of images in the target image set after a page-turning operation for the target image is detected includes: acquiring second eye tracking information of the user; determining the page-turning operation to be executed according to the second eye tracking information, where the page-turning operation includes a forward page-turning operation and a backward page-turning operation, the forward page-turning operation corresponding to a first eyeball movement trajectory and the backward page-turning operation corresponding to a second eyeball movement trajectory; and displaying the plurality of images in the target image set when the page-turning operation to be executed is detected to be a backward page-turning operation.
Before the plurality of images in the target image set are displayed, second eye tracking information of the user is acquired, and the page-turning operation to be executed is determined from it. The eye movement trajectory of the user may be determined according to the second eye tracking information: for example, movement from left to right or from top to bottom corresponds to a backward page-turning operation, and movement from right to left or from bottom to top corresponds to a forward page-turning operation. While the target image is displayed, when the page-turning operation to be executed is detected to be a backward page-turning operation, the plurality of images in the target image set are displayed.
While the plurality of images in the target image set are displayed, forward and backward page turning may likewise be performed according to the acquired second eye tracking information of the user, and after the display finishes, acquisition of the second eye tracking information may be stopped.
As can be seen, in this example, the eye movement trajectory of the user may be determined from the second eye tracking information, and the corresponding page-turning operation executed according to that trajectory, so that the user need not turn pages manually while the plurality of images in the target image set are displayed.
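The trajectory-to-direction mapping can be sketched as follows, using the example convention above (left-to-right or top-to-bottom turns backward, the reverse turns forward); the displacement threshold and the sample format are assumptions:

```python
# Hedged sketch of mapping a gaze trajectory to a page-turn direction.
# Convention from the text: left-to-right or top-to-bottom movement is
# a backward turn, right-to-left or bottom-to-top a forward turn.
# Screen y grows downward; min_move is an assumed dead-zone threshold.

def page_turn(trajectory, min_move=20):
    """trajectory: list of (x, y) gaze samples, ordered first to last."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < min_move and abs(dy) < min_move:
        return None  # no deliberate eye movement
    if abs(dx) >= abs(dy):  # predominantly horizontal movement
        return "backward" if dx > 0 else "forward"
    return "backward" if dy > 0 else "forward"
```

A rightward sweep therefore classifies as a backward turn, which in this embodiment triggers display of the target image set.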
In one possible example, the displaying of the plurality of images in the target image set includes: determining a capture time of each of the plurality of images; sorting the plurality of images according to the capture times; and generating an image video from the sorted images and playing the image video.
When the plurality of images in the target image set are displayed, the capture time of each image can be determined and the images sorted in chronological order, earliest first or latest first. An image video can then be generated from the sorted images, so that the plurality of images are presented in video form.
Therefore, in this example, a memory video is generated from the plurality of images in the target image set in time order, and the images retrieved from the target image are presented to the user as a video, which helps evoke the user's memories and makes the images more enjoyable to view.
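The time-ordered step can be sketched as below; a path list stands in for actual video encoding, and the dict field names and sample filenames are illustrative:

```python
# Minimal sketch of the time-ordered display: sort the retrieved images
# by capture time and emit the playback order. A real implementation
# would encode a video; here the ordered path list stands in for it.
# The 'path'/'taken' field names and filenames are made up.

from datetime import datetime

def build_slideshow(images, earliest_first=True):
    """images: list of dicts with 'path' and 'taken' (datetime)."""
    ordered = sorted(images, key=lambda im: im["taken"],
                     reverse=not earliest_first)
    return [im["path"] for im in ordered]

shots = [
    {"path": "hat_beach.jpg", "taken": datetime(2019, 5, 1)},
    {"path": "hat_party.jpg", "taken": datetime(2018, 12, 24)},
    {"path": "hat_hike.jpg", "taken": datetime(2019, 3, 15)},
]
playlist = build_slideshow(shots)
```

Playback then proceeds from the oldest to the newest capture, or in reverse when `earliest_first=False`.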
Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application, and the image processing method is applied to an electronic device. As shown in the figure, the image processing method includes:
S301, when detecting that a user is browsing a target image in a gallery, the electronic device acquires first eye tracking information of the user.
S302, the electronic device determines the gaze duration of the user on the target image according to the first eye tracking information.
S303, when the electronic device detects that the gaze duration is longer than a preset duration, it acquires facial expression information of the user while browsing the target image.
S304, the electronic equipment determines the overall interest degree of the user in the target image according to the facial expression information.
S305, when detecting that the overall interest degree is greater than a preset interest degree, the electronic device determines the local interest degrees of the plurality of feature images displayed in partitions of the target image and the target feature image with the highest local interest degree.
S306, the electronic equipment searches a plurality of images containing the target characteristic image in the gallery to obtain a target image set and pushes the target image set to a user.
It can be seen that, in this embodiment of the application, when it is detected that a user is browsing a target image in a gallery, the electronic device first obtains first eye tracking information of the user; it then determines the user's overall interest degree in the target image according to the first eye tracking information, and, when the overall interest degree is detected to be greater than a preset interest degree, determines the local interest degrees of the plurality of feature images displayed in partitions of the target image and the target feature image with the highest local interest degree; finally, it retrieves the plurality of images in the gallery containing the target feature image, obtains a target image set, and pushes it to the user. While the user browses the target image, the electronic device analyzes the user's overall interest degree from the acquired first eye tracking information, and, when that overall interest is high, further analyzes the target feature image with the highest local interest degree within the target image, so that a target image set of gallery images containing the target feature image can be obtained. The user thus locates the target feature image with the eyes alone to retrieve the images containing it, which improves the efficiency of image retrieval and browsing.
In addition, the gaze duration of the user on the target image is determined by analyzing the first eye tracking information; when the gaze duration is detected to be greater than the preset duration, the target image is preliminarily judged to interest the user, and the overall interest degree is then determined further in combination with the user's facial expression while browsing the target image, so that the overall interest degree in the target image can be determined more accurately.
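Steps S301–S306 can be condensed into the following sketch; the gaze duration and interest score would come from the eye tracker and an expression classifier, and all thresholds, helper inputs, and the gallery structure are illustrative stand-ins, not part of the claimed method:

```python
# Condensed sketch of steps S301-S306 under assumed inputs. Threshold
# values and the gallery representation (image -> set of feature tags)
# are made up for illustration.

def process_browsing(gaze_duration_s, interest_score, gaze_counts,
                     gallery, min_duration_s=2.0, min_interest=0.5):
    # S302/S303: require a sufficiently long gaze on the target image.
    if gaze_duration_s <= min_duration_s:
        return None
    # S304/S305: require overall interest above the preset level.
    if interest_score <= min_interest:
        return None
    # S305: the target feature image is the one with the most fixation points.
    target = max(gaze_counts, key=gaze_counts.get)
    # S306: retrieve gallery images containing the target feature image.
    return sorted(img for img, feats in gallery.items() if target in feats)
```

If either gate fails, no retrieval is performed and browsing continues normally; otherwise the returned list forms the target image set pushed to the user.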
Referring to fig. 4, fig. 4 is a schematic flow chart of an image processing method according to an embodiment of the present disclosure, and the image processing method is applied to an electronic device including a touch display screen, where the touch display screen includes a first display area and a second display area, the first display area does not have a fingerprint identification function, and the second display area has a fingerprint identification function. As shown in the figure, the image processing method includes:
S401, when detecting that a user is browsing a target image in a gallery, the electronic device acquires first eye tracking information of the user.
S402, the electronic device determines the gaze duration of the user on the target image according to the first eye tracking information.
S403, when the electronic device detects that the gaze duration is longer than a preset duration, it acquires facial expression information of the user while browsing the target image.
S404, the electronic equipment determines the overall interest degree of the user in the target image according to the facial expression information.
S405, when the electronic device detects that the overall interest degree is greater than a preset interest degree, it determines, according to the first eye tracking information, a plurality of fixation points of the user on the target image while browsing the target image.
S406, the electronic equipment determines the characteristic image corresponding to each fixation point according to the display position of each fixation point in the multiple fixation points.
S407, the electronic device determines the number of the fixation points corresponding to each feature image in the plurality of feature images and determines the local interest level of each feature image according to the number of the fixation points.
S408, the electronic device determines the feature image with the largest number of fixation points among the plurality of feature images as the target feature image.
S409, the electronic equipment searches for a plurality of images containing the target characteristic image in the gallery to obtain a target image set and pushes the target image set to a user.
It can be seen that, in this embodiment of the application, when it is detected that a user is browsing a target image in a gallery, the electronic device first obtains first eye tracking information of the user; it then determines the user's overall interest degree in the target image according to the first eye tracking information, and, when the overall interest degree is detected to be greater than a preset interest degree, determines the local interest degrees of the plurality of feature images displayed in partitions of the target image and the target feature image with the highest local interest degree; finally, it retrieves the plurality of images in the gallery containing the target feature image, obtains a target image set, and pushes it to the user. While the user browses the target image, the electronic device analyzes the user's overall interest degree from the acquired first eye tracking information, and, when that overall interest is high, further analyzes the target feature image with the highest local interest degree within the target image, so that a target image set of gallery images containing the target feature image can be obtained. The user thus locates the target feature image with the eyes alone to retrieve the images containing it, which improves the efficiency of image retrieval and browsing.
In addition, the gaze duration of the user on the target image is determined by analyzing the first eye tracking information; when the gaze duration is detected to be greater than the preset duration, the target image is preliminarily judged to interest the user, and the overall interest degree is then determined further in combination with the user's facial expression while browsing the target image, so that the overall interest degree in the target image can be determined more accurately.
In addition, a plurality of fixation points of the user while browsing the target image are obtained and the feature image corresponding to each fixation point is determined, so that the target feature image with the user's highest local interest can be determined from the number of fixation points corresponding to each feature image; the retrieval condition is set to the target feature image, and the plurality of images in the gallery containing it can be determined rapidly.
Consistent with the embodiments shown in fig. 2, fig. 3, and fig. 4, please refer to fig. 5, fig. 5 is a schematic structural diagram of an electronic device 500 provided in the embodiments of the present application, where the electronic device 500 runs one or more application programs and an operating system, as shown in the figure, the electronic device 500 includes a processor 510, a memory 520, a communication interface 530, and one or more programs 521, where the one or more programs 521 are stored in the memory 520 and configured to be executed by the processor 510, and the one or more programs 521 include instructions for performing the following steps;
when detecting that a user is browsing a target image in a gallery, acquiring first eye tracking information of the user;
determining the overall interest degree of the user in the target image according to the first eye tracking information, and, when the overall interest degree is detected to be greater than a preset interest degree, determining the local interest degrees of the plurality of feature images displayed in partitions of the target image and the target feature image with the highest local interest degree;
and searching a plurality of images containing the target characteristic image in a gallery to obtain a target image set and pushing the target image set to a user.
It can be seen that, in this embodiment of the application, when it is detected that a user is browsing a target image in a gallery, the electronic device first obtains first eye tracking information of the user; it then determines the user's overall interest degree in the target image according to the first eye tracking information, and, when the overall interest degree is detected to be greater than a preset interest degree, determines the local interest degrees of the plurality of feature images displayed in partitions of the target image and the target feature image with the highest local interest degree; finally, it retrieves the plurality of images in the gallery containing the target feature image, obtains a target image set, and pushes it to the user. While the user browses the target image, the electronic device analyzes the user's overall interest degree from the acquired first eye tracking information, and, when that overall interest is high, further analyzes the target feature image with the highest local interest degree within the target image, so that a target image set of gallery images containing the target feature image can be obtained. The user thus locates the target feature image with the eyes alone to retrieve the images containing it, which improves the efficiency of image retrieval and browsing.
In one possible example, in the determining of the overall interest degree of the user in the target image according to the first eye tracking information, the instructions in the program are specifically configured to: determine the gaze duration of the user on the target image according to the first eye tracking information; acquire facial expression information of the user while browsing the target image when the gaze duration is detected to be greater than a preset duration; and determine the overall interest degree of the user in the target image according to the facial expression information.
In one possible example, in the determining of the local interest degrees of the plurality of feature images displayed in partitions of the target image and of the target feature image with the highest local interest degree, the instructions in the program are specifically configured to: determine, according to the first eye tracking information, a plurality of fixation points of the user on the target image while browsing the target image; determine the feature image corresponding to each fixation point according to the display position of each of the plurality of fixation points; determine the number of fixation points corresponding to each of the plurality of feature images and determine the local interest degree of each feature image according to that number; and determine the feature image with the largest number of fixation points among the plurality of feature images as the target feature image.
In one possible example, after detecting that the user is browsing the target image in the gallery and before acquiring the first eye tracking information of the user, the instructions in the program are specifically configured to: acquiring blinking operation of a user for the target image being browsed; when the blinking operation is detected to meet a preset condition, an eyeball tracking device is started, first eye tracking information of a user is obtained through the eyeball tracking device, and the eyeball tracking device comprises infrared equipment and image acquisition equipment.
In one possible example, after the obtaining of the target image set and pushing it to the user, the instructions in the program are specifically configured to: acquire a push-confirmation operation input by the user; and display the plurality of images in the target image set after a page-turning operation for the target image is detected.
In one possible example, in displaying the plurality of images in the target image set after the page-turning operation for the target image is detected, the instructions in the program are specifically configured to: acquiring second eye tracking information of the user; determining page turning operation to be executed according to the second eye tracking information, wherein the page turning operation comprises forward page turning operation and backward page turning operation, the forward page turning operation corresponds to a first eyeball motion track, and the backward page turning operation corresponds to a second eyeball motion track; and when the page turning operation to be executed is detected to be a backward page turning operation, displaying a plurality of images in the target image set.
In one possible example, in said displaying a plurality of images in the target image set, the instructions in the program are specifically configured to: determining a capture time for each of the plurality of images; sequencing the plurality of images according to the shooting time; and generating an image video from the sequenced images, and playing the image video.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one control unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 6 is a block diagram of functional units of an apparatus 600 according to an embodiment of the present application. The image processing apparatus 600 is applied to an electronic device, and the image processing apparatus 600 includes a processing unit 601 and a communication unit 602, where:
the processing unit 601 is configured to, when it is detected that a user is browsing a target image in a gallery, acquire first eye tracking information of the user through the communication unit 602; to determine the user's overall interest degree in the target image according to the first eye tracking information, and, when the overall interest degree is detected to be greater than a preset interest degree, determine the local interest degrees of the plurality of feature images displayed in partitions of the target image and the target feature image with the highest local interest degree; and to retrieve the plurality of images in the gallery containing the target feature image, obtain a target image set, and push it to the user.
It can be seen that, in this embodiment of the application, when it is detected that a user is browsing a target image in a gallery, the electronic device first obtains first eye tracking information of the user; it then determines the user's overall interest degree in the target image according to the first eye tracking information, and, when the overall interest degree is detected to be greater than a preset interest degree, determines the local interest degrees of the plurality of feature images displayed in partitions of the target image and the target feature image with the highest local interest degree; finally, it retrieves the plurality of images in the gallery containing the target feature image, obtains a target image set, and pushes it to the user. While the user browses the target image, the electronic device analyzes the user's overall interest degree from the acquired first eye tracking information, and, when that overall interest is high, further analyzes the target feature image with the highest local interest degree within the target image, so that a target image set of gallery images containing the target feature image can be obtained. The user thus locates the target feature image with the eyes alone to retrieve the images containing it, which improves the efficiency of image retrieval and browsing.
In one possible example, in the determining of the overall interest degree of the user in the target image according to the first eye tracking information, the processing unit 601 is specifically configured to: determine the gaze duration of the user on the target image according to the first eye tracking information; acquire facial expression information of the user while browsing the target image when the gaze duration is detected to be greater than a preset duration; and determine the overall interest degree of the user in the target image according to the facial expression information.
In one possible example, in the determining of the local interest degrees of the plurality of feature images displayed in partitions of the target image and of the target feature image with the highest local interest degree, the processing unit 601 is specifically configured to: determine, according to the first eye tracking information, a plurality of fixation points of the user on the target image while browsing the target image; determine the feature image corresponding to each fixation point according to the display position of each of the plurality of fixation points; determine the number of fixation points corresponding to each of the plurality of feature images and determine the local interest degree of each feature image according to that number; and determine the feature image with the largest number of fixation points among the plurality of feature images as the target feature image.
In one possible example, after the detecting that the user is browsing the target image in the gallery and before the acquiring of the first eye tracking information of the user, the processing unit 601 is specifically configured to: acquire a blinking operation of the user for the target image being browsed; and, when the blinking operation is detected to meet a preset condition, start an eye tracking device and acquire the first eye tracking information of the user through it, where the eye tracking device includes an infrared device and an image acquisition device.
In a possible example, after the obtaining of the target image set and pushing it to the user, the processing unit 601 is specifically configured to: acquire a push-confirmation operation input by the user; and display the plurality of images in the target image set after a page-turning operation for the target image is detected.
In a possible example, in the displaying of the plurality of images in the target image set after the page-turning operation for the target image is detected, the processing unit 601 is specifically configured to: acquire second eye tracking information of the user; determine the page-turning operation to be executed according to the second eye tracking information, where the page-turning operation includes a forward page-turning operation corresponding to a first eyeball movement trajectory and a backward page-turning operation corresponding to a second eyeball movement trajectory; and display the plurality of images in the target image set when the page-turning operation to be executed is detected to be a backward page-turning operation.
In one possible example, in the displaying of the plurality of images in the target image set, the processing unit 601 is specifically configured to: determine a capture time of each of the plurality of images; sort the plurality of images according to the capture times; and generate an image video from the sorted images and play it.
The electronic device may further include a storage unit 603; the processing unit 601 and the communication unit 602 may be a controller or a processor, and the storage unit 603 may be a memory.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes a mobile terminal.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a mobile terminal.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations; however, those skilled in the art will recognize that the present application is not limited by the described order of actions, as some steps may be performed in other orders or concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division of logical functions, and other divisions may be adopted in practice: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps of the methods of the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable memory, which may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiments of the present application have been described in detail above; specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
when detecting that a user is browsing a target image in a gallery, acquiring first eye tracking information of the user;
determining an overall interest degree of the user for the target image according to the first eye tracking information, and, when the overall interest degree is detected to be greater than a preset interest degree, determining local interest degrees of the user for a plurality of feature images displayed in partitions of the target image and a target feature image with the highest local interest degree;
and searching a gallery for a plurality of images containing the target feature image to obtain a target image set, and pushing the target image set to the user.
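The retrieval step of claim 1 can be sketched as a simple feature-tag lookup. The gallery schema (`path`, `features`) and the upstream feature detector that produces the tags are assumptions for illustration, not part of the claim.

```python
def search_gallery(gallery, target_feature):
    """Collect gallery images containing the target feature image.

    gallery: list of dicts with 'path' and 'features', where
    'features' is a set of feature labels assumed to come from an
    upstream detector. The matching images form the target image
    set that is pushed to the user.
    """
    return [im["path"] for im in gallery
            if target_feature in im["features"]]
```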
2. The method of claim 1, wherein determining the overall interest level of the user in the target image according to the first eye tracking information comprises:
determining the gaze duration of the user on the target image according to the first eye tracking information;
when it is detected that the gaze duration is longer than a preset duration, acquiring facial expression information of the user while the user browses the target image;
and determining the overall interest degree of the user on the target image according to the facial expression information.
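The procedure of claim 2 can be sketched as follows. The claim specifies only the control flow (consult facial expression once the gaze duration exceeds a preset duration); the numeric expression scores and the 3-second default are illustrative assumptions.

```python
def overall_interest(gaze_duration_s, expression, preset_duration_s=3.0):
    """Estimate the user's overall interest degree in the target image.

    Facial expression is only consulted once the gaze duration
    exceeds the preset duration, mirroring claim 2. The score table
    is an assumed mapping from expression labels to interest.
    """
    if gaze_duration_s <= preset_duration_s:
        return 0.0  # gaze too short: no elevated interest inferred
    expression_score = {"smile": 1.0, "neutral": 0.5, "frown": 0.1}
    return expression_score.get(expression, 0.5)
```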
3. The method according to claim 1 or 2, wherein the determining the local interest degrees of the plurality of feature images displayed in partitions of the target image and the target feature image with the highest local interest degree comprises:
determining, according to the first eye tracking information, a plurality of fixation points of the user for the target image while browsing the target image;
determining a feature image corresponding to each fixation point according to the display position of each of the plurality of fixation points;
determining the number of fixation points corresponding to each feature image of the plurality of feature images, and determining the local interest degree of each feature image according to the number of fixation points;
and determining the feature image with the largest number of fixation points among the plurality of feature images as the target feature image.
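The fixation-point counting of claim 3 can be sketched by mapping each gaze point to the display region of a feature image and taking the fixation count as that image's local interest degree. The rectangular-region representation is an illustrative assumption.

```python
from collections import Counter


def target_feature_image(fixation_points, regions):
    """Pick the feature image with the most fixation points.

    fixation_points: list of (x, y) gaze coordinates.
    regions: dict mapping a feature-image name to the bounding box
    (x0, y0, x1, y1) of its display position. The local interest
    degree of each feature image is taken as its fixation count,
    and the image with the highest count is the target.
    """
    counts = Counter()
    for x, y in fixation_points:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break  # each fixation point maps to one feature image
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```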
4. The method according to any one of claims 1-3, wherein after detecting that the user is browsing the target image in the gallery and before acquiring the first eye tracking information of the user, the method further comprises:
acquiring a blinking operation of the user for the target image being browsed;
when the blinking operation is detected to meet a preset condition, activating an eye tracking apparatus and acquiring the first eye tracking information of the user through the eye tracking apparatus, wherein the eye tracking apparatus comprises an infrared device and an image acquisition device.
5. The method of claim 1, wherein after obtaining and pushing the target image set to the user, the method further comprises:
acquiring a push confirmation operation input by the user;
and displaying a plurality of images in the target image set after detecting page turning operation aiming at the target image.
6. The method of claim 5, wherein displaying the plurality of images in the set of target images after detecting a page-turning operation for the target image comprises:
acquiring second eye tracking information of the user;
determining a page turning operation to be performed according to the second eye tracking information, wherein the page turning operation comprises a forward page turning operation and a backward page turning operation, the forward page turning operation corresponds to a first eye movement trajectory, and the backward page turning operation corresponds to a second eye movement trajectory;
and when the page turning operation to be performed is detected to be a backward page turning operation, displaying the plurality of images in the target image set.
7. The method of claim 5, wherein the displaying the plurality of images in the target image set comprises:
determining a capture time for each of the plurality of images;
sorting the plurality of images according to the capture time;
and generating an image video from the sorted images, and playing the image video.
8. An image processing apparatus applied to an electronic device, the image processing apparatus including a processing unit and a communication unit, wherein,
the processing unit is configured to acquire, through the communication unit, first eye tracking information of a user when it is detected that the user is browsing a target image in a gallery; to determine an overall interest degree of the user for the target image according to the first eye tracking information, and, when the overall interest degree is detected to be greater than a preset interest degree, to determine local interest degrees of the user for a plurality of feature images displayed in partitions of the target image and a target feature image with the highest local interest degree; and to search the gallery for a plurality of images containing the target feature image, obtain a target image set, and push the target image set to the user.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
CN201910503401.1A 2019-06-11 2019-06-11 Image processing method and relevant apparatus Pending CN110245250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910503401.1A CN110245250A (en) 2019-06-11 2019-06-11 Image processing method and relevant apparatus

Publications (1)

Publication Number Publication Date
CN110245250A true CN110245250A (en) 2019-09-17

Family

ID=67886577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910503401.1A Pending CN110245250A (en) 2019-06-11 2019-06-11 Image processing method and relevant apparatus

Country Status (1)

Country Link
CN (1) CN110245250A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309146A (en) * 2020-02-10 2020-06-19 Oppo广东移动通信有限公司 Image display method and related product
CN111510626A (en) * 2020-04-21 2020-08-07 Oppo广东移动通信有限公司 Image synthesis method and related device
CN111580671A (en) * 2020-05-12 2020-08-25 Oppo广东移动通信有限公司 Video image processing method and related device
CN112580409A (en) * 2019-09-30 2021-03-30 Oppo广东移动通信有限公司 Target object selection method and related product
CN112861633A (en) * 2021-01-08 2021-05-28 广州朗国电子科技有限公司 Image recognition method and device based on machine learning and storage medium
CN113849142A (en) * 2021-09-26 2021-12-28 深圳市火乐科技发展有限公司 Image display method and device, electronic equipment and computer readable storage medium
CN116828099A (en) * 2023-08-29 2023-09-29 荣耀终端有限公司 Shooting method, medium and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426399A (en) * 2015-10-29 2016-03-23 天津大学 Eye movement based interactive image retrieval method for extracting image area of interest
CN107111629A (en) * 2014-10-30 2017-08-29 四提拓有限公司 The method and system of object interested for detecting
US9946795B2 (en) * 2014-01-27 2018-04-17 Fujitsu Limited User modeling with salience
CN108921585A (en) * 2018-05-15 2018-11-30 北京七鑫易维信息技术有限公司 A kind of advertisement sending method, device, equipment and storage medium
CN109151338A (en) * 2018-07-10 2019-01-04 Oppo广东移动通信有限公司 Image processing method and related product
CN109726713A (en) * 2018-12-03 2019-05-07 东南大学 User's area-of-interest detection system and method based on consumer level Eye-controlling focus instrument

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580409A (en) * 2019-09-30 2021-03-30 Oppo广东移动通信有限公司 Target object selection method and related product
CN112580409B (en) * 2019-09-30 2024-06-07 Oppo广东移动通信有限公司 Target object selection method and related product
EP4075240A4 (en) * 2020-02-10 2023-08-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image display method and related product
CN111309146A (en) * 2020-02-10 2020-06-19 Oppo广东移动通信有限公司 Image display method and related product
WO2021159935A1 (en) * 2020-02-10 2021-08-19 Oppo广东移动通信有限公司 Image display method and related product
CN111309146B (en) * 2020-02-10 2022-03-29 Oppo广东移动通信有限公司 Image display method and related product
CN111510626A (en) * 2020-04-21 2020-08-07 Oppo广东移动通信有限公司 Image synthesis method and related device
WO2021213031A1 (en) * 2020-04-21 2021-10-28 Oppo广东移动通信有限公司 Image synthesis method and related apparatus
CN111510626B (en) * 2020-04-21 2022-01-04 Oppo广东移动通信有限公司 Image synthesis method and related device
EP4135308A4 (en) * 2020-04-21 2023-10-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image synthesis method and related apparatus
CN111580671A (en) * 2020-05-12 2020-08-25 Oppo广东移动通信有限公司 Video image processing method and related device
CN112861633A (en) * 2021-01-08 2021-05-28 广州朗国电子科技有限公司 Image recognition method and device based on machine learning and storage medium
CN112861633B (en) * 2021-01-08 2022-05-31 广州朗国电子科技股份有限公司 Image recognition method and device based on machine learning and storage medium
CN113849142B (en) * 2021-09-26 2024-05-28 深圳市火乐科技发展有限公司 Image display method, device, electronic equipment and computer readable storage medium
CN113849142A (en) * 2021-09-26 2021-12-28 深圳市火乐科技发展有限公司 Image display method and device, electronic equipment and computer readable storage medium
CN116828099A (en) * 2023-08-29 2023-09-29 荣耀终端有限公司 Shooting method, medium and electronic equipment
CN116828099B (en) * 2023-08-29 2023-12-19 荣耀终端有限公司 Shooting method, medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN110245250A (en) Image processing method and relevant apparatus
CN110248241B (en) Video processing method and related device
CN109407936B (en) Screenshot method and related device
CN111314759B (en) Video processing method and device, electronic equipment and storage medium
CN110231963B (en) Application control method and related device
CN108668072B (en) Mobile terminal, photographing method and related product
CN111309146B (en) Image display method and related product
CN115176456B (en) Content operation method, device, terminal and storage medium
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN110262659B (en) Application control method and related device
CN107590474B (en) Unlocking control method and related product
CN105892635A (en) Image capture realization method and apparatus as well as electronic device
US20220383523A1 (en) Hand tracking method, device and system
US11321886B2 (en) Apparatus and associated methods
CN110308860B (en) Screen capturing method and related device
CN106096043B (en) A kind of photographic method and mobile terminal
CN105933772A (en) Interaction method, interaction apparatus and interaction system
CN104869317B (en) Smart machine image pickup method and device
CN107391608B (en) Picture display method and device, storage medium and electronic equipment
CN107728877B (en) Application recommendation method and mobile terminal
CN108401173A (en) Interactive terminal, method and the computer readable storage medium of mobile live streaming
CN113010738B (en) Video processing method, device, electronic equipment and readable storage medium
CN108009273B (en) Image display method, image display device and computer-readable storage medium
CN109658328A (en) From animal head ear processing method and the Related product of shooting the video
CN109284060A (en) Display control method and relevant apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190917