CN111475677A - Image processing method, image processing device, storage medium and electronic equipment

Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN111475677A
Authority
CN
China
Prior art keywords
image
video
information
playing time
electronic device
Prior art date
Legal status
Pending
Application number
CN202010365488.3A
Other languages
Chinese (zh)
Inventor
吴恒刚
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010365488.3A priority Critical patent/CN111475677A/en
Publication of CN111475677A publication Critical patent/CN111475677A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: of video data
    • G06F 16/73: Querying
    • G06F 16/732: Query formulation
    • G06F 16/7328: Query by example, e.g. a complete video frame or video sequence
    • G06F 16/738: Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, a storage medium, and an electronic device. The image processing method comprises the following steps: acquiring a first image; extracting image features of the first image; searching a preset database for a second image matching the first image, wherein the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises the image features of the image and the playing time of the image in the video; and if a second image matching the first image is found, displaying information of the playing time of the second image in the corresponding video, the image features of the first image and the second image matching each other. The method and device can improve the accuracy with which an electronic device searches for a video according to a picture.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, a storage medium, and an electronic device.
Background
As the imaging functions of electronic devices grow more powerful, users increasingly use electronic devices to capture various images, such as taking pictures or recording videos. In some scenarios, the user may also download various picture and video files from the network and store them on the electronic device. However, in the related art, the accuracy with which an electronic device searches for a video according to a picture is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, a storage medium and electronic equipment, which can improve the accuracy of searching videos by the electronic equipment according to pictures.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
acquiring a first image;
extracting image features of the first image;
searching whether a second image matched with the first image exists in a preset database, wherein the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises image characteristics and playing time of the image in the video;
and if a second image matched with the first image is searched, displaying the information of the playing time of the second image in the corresponding video, wherein the image characteristics of the first image and the second image are matched with each other.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring a first image;
the extraction module is used for extracting image features of the first image;
the searching module is used for searching whether a second image matched with the first image exists in a preset database, the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises image characteristics and playing time of the image in the video;
and the display module is used for displaying the information of the playing time of the second image in the corresponding video if the second image matched with the first image is searched, and the image characteristics of the first image and the second image are matched with each other.
In a third aspect, an embodiment of the present application provides a storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to execute an image processing method provided by an embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the image processing method provided in the embodiment of the present application by calling a computer program stored in the memory.
In the embodiment of the application, the electronic device may acquire the first image and extract the image feature of the first image. Thereafter, the electronic device may search the preset database for the presence of a second image matching the image features of the first image. The preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises image characteristics and playing time of the image in the video. If a second image matching the first image is searched in the preset database, the electronic device can display the playing time information of the second image in the corresponding video. According to the method and the device, the playing time information of the second image matched with the first image in the corresponding video can be directly displayed to the user for viewing, so that the user can conveniently find the second image from the corresponding video directly according to the playing time. Therefore, the method and the device for searching the video by the electronic equipment can improve the accuracy of searching the video by the electronic equipment according to the image.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 3 to fig. 8 are scene schematic diagrams of an image processing method according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Fig. 11 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
It is understood that the execution subject of the embodiment of the present application may be an electronic device such as a smart phone or a tablet computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application, where the flow chart may include:
101. a first image is acquired.
As the imaging functions of electronic devices grow more powerful, users increasingly use electronic devices to capture various images, such as taking pictures or recording videos. In some scenarios, the user may also download various picture and video files from the network and store them on the electronic device. However, in the related art, the accuracy with which an electronic device searches for a video according to a picture is poor.
In this embodiment, for example, the electronic device may first acquire the first image.
102. Image features of the first image are extracted.
For example, after acquiring the first image, the electronic device may extract image features of the first image.
103. Searching whether a second image matched with the first image exists in a preset database, wherein the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from the video, and the image information of each image comprises image characteristics and the playing time of the image in the video.
For example, after extracting the image features of the first image, the electronic device may search the preset database for whether a second image matching the first image exists. The preset database may include a plurality of images and image information corresponding to each image. Wherein each image may be an image of a video frame extracted from a video. The image information corresponding to each image may include image characteristics of the image and a playing time of the image in the corresponding video.
For example, a video is composed of successive frames of images, and each frame of image may be referred to as a video frame. In the embodiment of the application, the electronic device can extract each frame of image forming the video, extract the image features of each frame, and acquire the playing time of each frame in the video. The playing time of a video frame's image in the video refers to the second at which the video plays the image corresponding to that video frame.
For example, if a certain video A plays the image corresponding to video frame A120 at the 120th second, then the playing time of video frame A120 in the video is the 120th second.
In one embodiment, each sample in the preset database may be represented in the following form: image identification - image features - playing time of the image in the corresponding video. Taking the above image A120 as an example, it can be represented as: image A120 - image features of image A120 - 120th second in video A.
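The sample form above can be sketched as a small record type. This is an illustrative Python sketch only; the field names (`image_id`, `features`, `video_id`, `play_time_s`) are assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FrameSample:
    """One database sample: image identification - image features - play time."""
    image_id: str          # e.g. "A120", the frame of video A played at second 120
    features: List[float]  # extracted image features (feature values/vectors)
    video_id: str          # identifier of the video the frame was extracted from
    play_time_s: float     # playing time of the frame in the video, in seconds

# The example from the text: image A120 plays at the 120th second of video A.
sample = FrameSample(image_id="A120", features=[0.12, 0.87, 0.45],
                     video_id="A", play_time_s=120.0)
```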
If a second image matching the first image is searched in the preset database, the process proceeds to 104.
If no second image matching the first image is found in the preset database, the electronic device may perform other operations. For example, the electronic device may prompt the user that no image matching the first image was found, and so on.
104. And if a second image matched with the first image is searched, displaying the information of the playing time of the second image in the corresponding video, wherein the image characteristics of the first image and the second image are matched with each other.
For example, if the electronic device searches a preset database for a second image matching the first image, the electronic device may display information of a playing time of the second image in the corresponding video. Wherein the image features of the first image and the second image may be matched to each other.
For example, suppose the first image is B. If, according to the image features of the first image B, the electronic device finds in the preset database that the image features of image A120 match those of the first image B, the electronic device may determine image A120 as the second image and display the information that the playing time of image A120 in video A is the 120th second.
It can be understood that, in the embodiment of the present application, the electronic device may acquire the first image and extract the image feature of the first image. Thereafter, the electronic device may search the preset database for the presence of a second image matching the image features of the first image. The preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises image characteristics and playing time of the image in the video. If a second image matching the first image is searched in the preset database, the electronic device can display the playing time information of the second image in the corresponding video. According to the method and the device, the playing time information of the second image matched with the first image in the corresponding video can be directly displayed to the user for viewing, so that the user can conveniently find the second image from the corresponding video directly according to the playing time. Therefore, the method and the device for searching the video by the electronic equipment can improve the accuracy of searching the video by the electronic equipment according to the image.
Referring to fig. 2, fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application, where the flow chart may include:
201. the electronic device acquires a first image.
202. The electronic device extracts image features of the first image.
For example, 201 and 202 may include:
for example, when a user wants to find out which video an image comes from, the user may first input the image into the electronic device; that is, the electronic device may acquire the image, for example a first image, and extract the image features of the first image.
In one embodiment, the image features of the first image extracted by the electronic device may include feature values and feature vectors of the first image.
203. The electronic equipment searches whether a second image matched with the first image exists in a preset database, the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from the video, and the image information of each image comprises image characteristics and playing time of the image in the video.
For example, after extracting the image feature of the first image, the electronic device may search the preset database for whether a second image matching the first image exists according to the image feature of the first image. The preset database may include a plurality of images and image information corresponding to each image. Each image in the preset database may be a video frame extracted from a certain video. The image information of each image may include information of image characteristics and a playing time of the image in the video.
For example, a video is composed of successive frames of images, and each frame of image may be referred to as a video frame. In the embodiment of the application, the electronic device can extract each frame of image forming the video, extract the image features of each frame, and acquire the playing time of each frame in the video. The playing time of a video frame's image in the video refers to the second at which the video plays the image corresponding to that video frame.
For example, if a certain video A plays the image corresponding to video frame A120 at the 120th second, then the playing time of video frame A120 in the video is the 120th second.
In one embodiment, each sample in the preset database may be represented in the following form: image identification - image features - playing time of the image in the corresponding video. Taking the above image A120 as an example, it can be represented as: image A120 - image features of image A120 - 120th second in video A.
In one embodiment, the image features may include feature values of the image and feature vectors. Then, the electronic device may search the preset database for the presence of a second image matching the feature values and/or the feature vectors of the first image according to the feature values and/or the feature vectors of the first image.
If a second image matching the first image is found in the predetermined database, the process may proceed to 204.
If no second image matching the first image is found in the preset database, the electronic device may perform other operations. For example, the electronic device may prompt the user that no image matching the first image was found, and so on.
In some embodiments, the preset database may be a database (dataset) formed from data stored locally on the electronic device. For example, the electronic device may parse locally stored videos one by one to obtain the image corresponding to each video frame in each video and the playing time of that image in the corresponding video, and then extract the image features of the image. The electronic device may then form a piece of sample data from each image, its image features, and the image's playing time in the corresponding video. From the obtained sample data, the electronic device can construct a database and determine it as the preset database.
Alternatively, the cloud device may also construct a database (data set) based on data stored in the cloud device in the above manner. The electronic device can download the database from the cloud device.
Alternatively, there may be a database on both the cloud device and the electronic device. In that case, when a user searches for a video according to a picture using the method provided in the embodiment of the present application, upon receiving the picture input by the user the electronic device may, on one hand, search its local database according to the picture and, on the other hand, upload the picture to the cloud device, which searches its own database. That is, the electronic device and the cloud device may search for videos according to the picture input by the user at the same time.
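The database-building procedure described above can be sketched in a few lines. This is a minimal pure-Python sketch under stated assumptions: `decode_frames` is a hypothetical helper that yields (frame image, playing time) pairs, and `extract_features` stands in for whatever feature extractor is used; neither name comes from the patent.

```python
def build_database(videos, decode_frames, extract_features):
    """Build the preset database: one sample per decoded video frame.

    videos:           mapping of video_id -> video source (path, handle, ...)
    decode_frames:    hypothetical decoder yielding (frame, play_time_s) pairs
    extract_features: feature extractor applied to a single frame
    """
    database = []
    for video_id, source in videos.items():
        for index, (frame, play_time_s) in enumerate(decode_frames(source)):
            database.append({
                "image_id": f"{video_id}{index}",   # illustrative naming: A0, A1, ...
                "features": extract_features(frame),
                "video_id": video_id,
                "play_time_s": play_time_s,
            })
    return database

# Usage with stand-in helpers (real code would decode actual video files):
fake_decoder = lambda src: [(f"{src}-frame0", 0.0), (f"{src}-frame1", 0.04)]
db = build_database({"A": "a.mp4"}, fake_decoder,
                    extract_features=lambda frame: [float(len(frame))])
```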
204. And if a second image matched with the first image is searched, the electronic equipment displays the information of the playing time of the second image in the corresponding video, and the image characteristics of the first image and the second image are matched with each other.
For example, suppose the first image is B. If, according to the image features of the first image B, the electronic device finds in the preset database that the image features of image A120 match those of the first image B, the electronic device may determine image A120 as the second image and display the information that the playing time of image A120 in video A is the 120th second for the user to view.
In some implementations, the image features may include one or more of feature values, feature vectors, color features, texture features, shape features, and spatial relationship features.
It should be noted that a color feature is a global feature that describes the surface properties of the scene corresponding to an image or image area. In general, color features are pixel-level features: all pixels belonging to the image or image area make their own contributions. Color features include color histograms, color sets, color moments, color coherence vectors, and color correlograms.
A texture feature is also a global feature that describes the surface properties of the scene corresponding to an image or image area. Common texture features include the gray-level co-occurrence matrix, Tamura texture features, and so on. Extraction and matching of gray-level co-occurrence matrix features mainly rely on four parameters: energy, inertia, entropy, and correlation. Tamura texture features are based on psychological studies of human visual perception of texture and include six attributes, namely coarseness, contrast, directionality, line-likeness, regularity, and roughness.
There are two types of representation methods for shape features, one is outline features and the other is region features. The outline features of the image are mainly directed to the outer boundary of the object, while the area features of the image are related to the entire shape area.
The spatial relationship in the spatial relationship feature refers to the mutual spatial position or relative direction relationship among the multiple targets segmented from the image; these relationships can be divided into connection/adjacency relationships, overlap/occlusion relationships, inclusion/containment relationships, and the like. There are two methods for extracting image spatial-relationship features: one automatically segments the image into the objects or color regions it contains, extracts image features from these regions, and builds an index; the other simply divides the image evenly into regular sub-blocks, then extracts features from each sub-block and builds an index.
When it is required to detect whether the two images match, the electronic device may extract one or more of a feature value, a feature vector, a color feature, a texture feature, a shape feature, and a spatial relationship feature of the two images, and then perform matching detection on the two images based on the extracted features. If the matching degree of the image features extracted from the two images is greater than or equal to the set threshold value, the two images can be determined to be matched. If the degree of matching of the image features extracted from the two images is smaller than a set threshold, it can be determined that the two images do not match.
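The matching-degree test described above can be sketched with cosine similarity as the degree measure. This is an illustrative choice of measure, not one the patent prescribes; the 0.8 default threshold is likewise an assumption.

```python
def matching_degree(features_a, features_b):
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = sum(a * a for a in features_a) ** 0.5
    norm_b = sum(b * b for b in features_b) ** 0.5
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

def images_match(features_a, features_b, threshold=0.8):
    """Two images match when the matching degree meets the set threshold."""
    return matching_degree(features_a, features_b) >= threshold
```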
In the case where the image features include feature values and/or feature vectors, the matching of the image features of the first image and the second image with each other may include:
the matching rate of the characteristic value of the first image and the characteristic value of the second image is greater than or equal to a preset first threshold;
and/or the matching rate of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset second threshold;
and/or the matching number of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset third threshold value.
For example, in a case where the extracted image features include only feature values, the electronic device may determine that the image features of the first image and the second image match each other when a matching rate of the feature values of the first image and the feature values of the second image is greater than or equal to a preset first threshold.
Alternatively, in the case where the extracted image features include only the feature vector, the electronic device may determine that the image features of the first image and the second image match each other when a matching rate of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset second threshold.
Alternatively, in a case where the extracted image features include feature values and feature vectors, when a matching rate of the feature values of the first image and the feature values of the second image is greater than or equal to a preset first threshold and a matching rate of the feature vectors of the first image and the feature vectors of the second image is greater than or equal to a preset second threshold, the electronic device may determine that the image features of the first image and the second image match each other.
Alternatively, in a case where the extracted image features include feature values and feature vectors, when a matching rate of the feature values of the first image and the feature values of the second image is greater than or equal to a preset first threshold and a matching number of the feature vectors of the first image and the second image is greater than or equal to a preset third threshold, the electronic device may determine that the image features of the first image and the second image match each other.
Alternatively, in the case where the extracted image features include feature values and feature vectors, when a matching rate of the feature values of the first image and the feature values of the second image is greater than or equal to a preset first threshold, and a matching rate of the feature vectors of the first image and the second image is greater than or equal to a preset second threshold, and a matching number of the feature vectors of the first image and the second image is greater than or equal to a preset third threshold, the electronic device may determine that the image features of the first image and the second image match each other.
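The threshold combinations above can be captured in one function. A minimal sketch, assuming the three matching quantities have already been computed; criteria that were not extracted are passed as `None` and skipped, mirroring the "and/or" wording, and the default thresholds are illustrative.

```python
def features_match(value_rate=None, vector_rate=None, vector_count=None,
                   first_threshold=0.8, second_threshold=0.8, third_threshold=20):
    """Return True when every supplied criterion reaches its preset threshold.

    value_rate:   matching rate of the two images' feature values
    vector_rate:  matching rate of the two images' feature vectors
    vector_count: number of matching feature vectors between the two images
    """
    checks = []
    if value_rate is not None:
        checks.append(value_rate >= first_threshold)
    if vector_rate is not None:
        checks.append(vector_rate >= second_threshold)
    if vector_count is not None:
        checks.append(vector_count >= third_threshold)
    # With no criteria supplied there is nothing to match on.
    return bool(checks) and all(checks)
```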
205. After receiving a playing instruction, the electronic device starts to play the corresponding video from the playing time of the second image in the corresponding video, wherein the playing instruction is a trigger operation on target information, and the target information is information of the playing time of the displayed second image in the corresponding video.
For example, in one embodiment, the information of the playing time of the second image in the corresponding video displayed by the electronic device may be triggerable (e.g., clickable). When the user clicks the displayed information, it indicates that the user wants to play the video corresponding to the second image. Therefore, after receiving the playing instruction, the electronic device may start playing the corresponding video from the playing time of the second image in the corresponding video. That is, the playing instruction may be a trigger operation on target information, where the target information is the displayed information of the playing time of the second image in the corresponding video.
For example, the electronic device presents to the user the information that the playing time of the second image A120 in the corresponding video A is the 120th second. Then, when the user clicks the presented information, the electronic device may play video A from its 120th second, i.e., from video frame A120 of video A.
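The handling of the playing instruction can be sketched as an open-seek-play sequence. The player interface here (`open`, `seek`, `play`) is hypothetical, chosen only to show the order of operations; `RecordingPlayer` is a stand-in used to make the sketch self-contained.

```python
class RecordingPlayer:
    """Stand-in video player that records the calls it receives."""
    def __init__(self):
        self.calls = []
    def open(self, video_id):
        self.calls.append(("open", video_id))
    def seek(self, play_time_s):
        self.calls.append(("seek", play_time_s))
    def play(self):
        self.calls.append(("play",))

def on_play_instruction(video_id, play_time_s, player):
    """Triggered when the displayed play-time information is tapped:
    open the matched video and start playback from the second image's time."""
    player.open(video_id)
    player.seek(play_time_s)  # jump straight to the matched frame's second
    player.play()

# Usage: tapping "120th second in video A" starts video A at second 120.
player = RecordingPlayer()
on_play_instruction("A", 120.0, player)
```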
206. And when the searched video corresponding to the second image is determined to be the video correctly matched with the first image, the electronic equipment determines the playing time information of the second image in the corresponding video as the target playing time information.
207. And according to the first image, the image characteristics of the first image and the target playing time information, the electronic equipment generates new sample data.
208. And the electronic equipment adds the new sample data into a preset database.
For example, 206, 207, 208 may include:
after the electronic device starts playing the corresponding video from the playing time of the second image in the corresponding video, the electronic device may further query the user whether the video of the second image searched for is a video that correctly matches the first image.
If the electronic device receives the user's confirmation that the video corresponding to the searched second image is the video correctly matched with the first image, that is, the video corresponding to the second image is the video the user wanted to find according to the first image, the electronic device may determine the information of the playing time of the second image in the corresponding video as the target playing time information. For example, the electronic device may determine "120th second in video A" as the target playing time information.
Then, the electronic device may generate a new piece of sample data according to the first image, the image features of the first image, and the target playing time information. For example, the new sample data may be represented as: first image B - image features of first image B - 120th second in video A.
After new sample data is generated, the electronic equipment can add the new sample data into the preset database, so that the sample size of the preset database is enriched, and the self-updating of the preset database is realized.
It can be understood that, since the second image is an image matched with the first image, the first and second images can be considered similar images. Therefore, the information of the playing time of the second image in the corresponding video can also be applied to the first image, so as to construct a new piece of sample data and add it to the preset database, thereby enriching the preset database.
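The self-updating step in 206-208 can be sketched as appending one confirmed record. The dict keys below are illustrative assumptions carried over from the sample form discussed earlier, not names from the patent.

```python
def add_confirmed_sample(database, first_image_id, first_image_features,
                         video_id, play_time_s):
    """After the user confirms the matched video, reuse the second image's
    play time as the first image's target play time and store a new sample."""
    database.append({
        "image_id": first_image_id,
        "features": first_image_features,
        "video_id": video_id,
        "play_time_s": play_time_s,
    })
    return database

# E.g. first image B confirmed against the 120th second of video A:
db = add_confirmed_sample([], "B", [0.11, 0.88], "A", 120.0)
```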
In some embodiments, the preset database may be a database formed from data stored locally on the electronic device. For example, the electronic device may parse locally stored videos one by one to obtain the image corresponding to each video frame in each video and the playing time of that image in the corresponding video, and then extract the image features of the image. The electronic device may then form a piece of sample data from each image, its image features, and the image's playing time in the corresponding video. From the obtained sample data, the electronic device can construct a database and determine it as the preset database.
Alternatively, the cloud device may also construct a database based on data stored in the cloud device in the above manner. The electronic device can download the database from the cloud device.
Alternatively, the cloud device and the electronic device may each maintain their own database. In that case, when the user searches for a video based on a picture using the method provided in this embodiment of the present application, the electronic device, upon receiving the picture input by the user, may search its local database based on the picture and may also upload the picture to the cloud device so that the cloud device searches its own database. That is, the electronic device and the cloud device may search for videos based on the user's picture simultaneously.
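The simultaneous local and cloud lookup described above can be sketched as two concurrent tasks. The two search functions here are placeholders standing in for the on-device lookup and the upload-to-cloud lookup; their names and return shape are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def search_local(picture):
    # Placeholder for a lookup in the electronic device's local database.
    return {"source": "local", "match": None}

def search_cloud(picture):
    # Placeholder for uploading the picture and letting the cloud device search.
    return {"source": "cloud", "match": None}

def parallel_search(picture):
    """Run the local and cloud searches at the same time, as the embodiment
    describes, and collect both results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        local = pool.submit(search_local, picture)
        cloud = pool.submit(search_cloud, picture)
        return local.result(), cloud.result()
```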
Referring to fig. 3 to 8, fig. 3 to 8 are schematic scene diagrams of an image processing method according to an embodiment of the present application.
For example, when a user wants to find out which video a picture comes from, the user may first input the picture to be searched into the electronic device, as shown in fig. 3. Suppose the picture input by the user and obtained by the electronic device is picture P.
After the picture P input by the user is acquired, the electronic device may extract image features of the picture P. For example, the electronic device may extract feature values and feature vectors of picture P.
Then, based on the feature value and feature vector of picture P, the electronic device may search the preset database for a picture matching picture P. The preset database may include a plurality of pictures and the picture information corresponding to each picture, where the pictures may be video frames extracted from various videos. The picture information of each picture may include the feature value and feature vector of that picture and its playing time in the corresponding video. While the electronic device searches the preset database for a picture matching picture P, its interface may appear as shown in fig. 4.
If a picture matching the picture P is not searched in the preset database, the electronic device may prompt the user that a matching picture is not searched.
If a picture matching picture P is found in the preset database, for example, if the electronic device finds a picture Q whose feature value matches that of picture P at a rate greater than a preset first threshold and whose feature vector matches that of picture P at a rate greater than a preset second threshold, the electronic device may determine picture Q as a picture matching picture P. At this time, the electronic device may present the information about the playing time of picture Q in the corresponding video. For example, as shown in fig. 5, the electronic device may display the following information in its interface: "Found picture Q matching picture P; the playing time of picture Q in video V is the 120th second." Here, the text "the playing time of picture Q in video V is the 120th second" can be clicked by the user.
For example, the user clicks the text "the playing time of picture Q in video V is the 120th second", as shown in fig. 6. The electronic device may then play video V starting from its 120th second, so that the user can confirm whether the found video V is the correct source of picture P, as shown in fig. 7.
Afterwards, the electronic device may generate inquiry information asking the user whether the found video V is the video corresponding to picture P, as shown in fig. 8. Suppose that, in response to this inquiry, the electronic device receives the user's feedback confirming that video V is the video corresponding to picture P; that is, the electronic device correctly found which video picture P comes from. In this case, the electronic device may take the information "the playing time in video V is the 120th second", generate a new piece of sample data from picture P, the feature value and feature vector of picture P, and that playing time information, and add the new sample data to the preset database, thereby enriching its samples.
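The feedback-driven self-update in this scene can be sketched as follows. The record layout and all names are illustrative assumptions, not a format required by the embodiment.

```python
def confirm_and_update(database, query_image, query_feature,
                       video_id, play_time_s, user_confirmed):
    """Append the query picture as a new sample only when the user confirms
    (via the inquiry of fig. 8) that the found video is the correct source."""
    if user_confirmed:
        database.append({
            "image": query_image,        # picture P itself
            "feature": query_feature,    # e.g. its feature value and feature vector
            "video_id": video_id,        # e.g. "V"
            "play_time_s": play_time_s,  # e.g. 120
        })
    return database
```

With negative feedback, the database is left untouched; only confirmed matches enrich the sample set.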
It can be understood that, in the embodiments of the present application, the electronic device can find the corresponding video from a picture and start playback at the playing time of that picture within the video. This improves the accuracy with which the electronic device searches for videos from pictures, effectively enriches its video search modes, and improves the user experience.
Compared with the traditional way of searching for videos by text or voice, the embodiments of the present application can find the corresponding video directly from a picture, providing the user with a more intuitive and convenient search experience.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 300 may include: the system comprises an acquisition module 301, an extraction module 302, a search module 303 and a presentation module 304.
An acquiring module 301, configured to acquire a first image.
An extracting module 302, configured to extract an image feature of the first image.
The searching module 303 is configured to search a preset database for whether a second image matching the first image exists, where the preset database includes a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image includes image features and a playing time of the image in the video.
A displaying module 304, configured to display, if a second image matching the first image is found, information of a playing time of the second image in a corresponding video, where image features of the first image and the second image are matched with each other.
In one embodiment, the display module 304 may be further configured to:
after a playing instruction is received, the corresponding video is played from the playing time of the second image in the corresponding video, the playing instruction is a trigger operation on target information, and the target information is information of the playing time of the displayed second image in the corresponding video.
In one embodiment, the image feature includes one or more of a feature value, a feature vector, a color feature, a texture feature, a shape feature, and a spatial relationship feature of the image.
In one embodiment, matching image features of the first image and the second image may include:
the matching rate of the characteristic value of the first image and the characteristic value of the second image is greater than or equal to a preset first threshold value;
and/or the matching rate of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset second threshold;
and/or the matching number of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset third threshold value.
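The three "and/or" criteria above can be combined as in the following sketch. The tolerance-based definition of "matching rate" is an assumption made for illustration, since the embodiment does not specify how the rate is computed; the default thresholds are likewise hypothetical.

```python
def match_rate(a, b, tol=1e-6):
    """Illustrative 'matching rate': the fraction of positions at which two
    equal-length feature sequences agree to within a tolerance."""
    hits = sum(1 for x, y in zip(a, b) if abs(x - y) < tol)
    return hits / len(a) if a else 0.0

def images_match(val1, val2, vec1, vec2,
                 t1=0.8, t2=0.8, t3=3, tol=1e-6):
    """Mirror the 'and/or' phrasing: satisfying any one criterion suffices."""
    value_ok = match_rate(val1, val2, tol) >= t1            # first threshold
    vector_rate_ok = match_rate(vec1, vec2, tol) >= t2      # second threshold
    vector_count_ok = sum(                                  # third threshold
        1 for x, y in zip(vec1, vec2) if abs(x - y) < tol) >= t3
    return value_ok or vector_rate_ok or vector_count_ok
```

A stricter embodiment could replace `or` with `and` to require all criteria simultaneously; the claim language admits either combination.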
In one embodiment, the display module 304 may be further configured to:
when the searched video corresponding to the second image is determined to be the video correctly matched with the first image, determining the information of the playing time of the second image in the corresponding video as target playing time information;
generating new sample data according to the first image, the image characteristics of the first image and the target playing time information;
and adding the new sample data into the preset database.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored. When executed on a computer, the computer program causes the computer to execute the flow of the image processing method provided in this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the flow in the image processing method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device 400 may include a touch display 401, memory 402, a processor 403, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 10 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The touch display screen 401 can be used to display information such as characters, images, and the like, and can also be used to receive a touch operation from a user and the like.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, so as to execute:
acquiring a first image;
extracting image features of the first image;
searching whether a second image matched with the first image exists in a preset database, wherein the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises image characteristics and playing time of the image in the video;
and if a second image matched with the first image is searched, displaying the information of the playing time of the second image in the corresponding video, wherein the image characteristics of the first image and the second image are matched with each other.
Referring to fig. 11, the electronic device 400 may include a touch display screen 401, a memory 402, a processor 403, a battery 404, a microphone 405, a speaker 406, and other components.
The touch display screen 401 can be used to display information such as characters, images, and the like, and can also be used to receive a touch operation from a user and the like.
The memory 402 may be used to store applications and data. The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
The battery 404 may be used to supply power to the components and modules of the electronic device, ensuring that the device can operate properly.
The microphone 405 may be used to collect sound signals from the surrounding environment, for example voice commands issued by the user.
The speaker 406 may be used to output sound signals.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, so as to execute:
acquiring a first image;
extracting image features of the first image;
searching whether a second image matched with the first image exists in a preset database, wherein the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises image characteristics and playing time of the image in the video;
and if a second image matched with the first image is searched, displaying the information of the playing time of the second image in the corresponding video, wherein the image characteristics of the first image and the second image are matched with each other.
In one embodiment, the processor 403 may further perform: after a playing instruction is received, the corresponding video is played from the playing time of the second image in the corresponding video, the playing instruction is a trigger operation on target information, and the target information is information of the playing time of the displayed second image in the corresponding video.
In one embodiment, the image feature includes one or more of a feature value, a feature vector, a color feature, a texture feature, a shape feature, and a spatial relationship feature of the image.
In one embodiment, matching image features of the first image and the second image may include: the matching rate of the characteristic value of the first image and the characteristic value of the second image is greater than or equal to a preset first threshold value; and/or the matching rate of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset second threshold; and/or the matching number of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset third threshold value.
In one embodiment, the processor 403 may be further configured to: when the searched video corresponding to the second image is determined to be the video correctly matched with the first image, determining the information of the playing time of the second image in the corresponding video as target playing time information; generating new sample data according to the first image, the image characteristics of the first image and the target playing time information; and adding the new sample data into the preset database.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the image processing method, and are not described herein again.
The image processing apparatus provided in the embodiment of the present application and the image processing method in the above embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be run on the image processing apparatus, and a specific implementation process thereof is described in the embodiment of the image processing method in detail, and is not described herein again.
It should be noted that, for the image processing method described in the embodiment of the present application, it can be understood by those skilled in the art that all or part of the process of implementing the image processing method described in the embodiment of the present application can be completed by controlling the relevant hardware through a computer program, where the computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and during the execution, the process of the embodiment of the image processing method can be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the image processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing detailed description has provided an image processing method, an image processing apparatus, a storage medium, and an electronic device according to embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the foregoing embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a first image;
extracting image features of the first image;
searching whether a second image matched with the first image exists in a preset database, wherein the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises image characteristics and playing time of the image in the video;
and if a second image matched with the first image is searched, displaying the information of the playing time of the second image in the corresponding video, wherein the image characteristics of the first image and the second image are matched with each other.
2. The image processing method according to claim 1, characterized in that the method further comprises:
after a playing instruction is received, the corresponding video is played from the playing time of the second image in the corresponding video, the playing instruction is a trigger operation on target information, and the target information is information of the playing time of the displayed second image in the corresponding video.
3. The image processing method according to claim 1, wherein the image feature comprises one or more of a feature value, a feature vector, a color feature, a texture feature, a shape feature, and a spatial relationship feature of an image.
4. The image processing method according to claim 3, wherein the image features of the first image and the second image match each other, comprising:
the matching rate of the characteristic value of the first image and the characteristic value of the second image is greater than or equal to a preset first threshold value;
and/or the matching rate of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset second threshold;
and/or the matching number of the feature vector of the first image and the feature vector of the second image is greater than or equal to a preset third threshold value.
5. The image processing method according to claim 1, characterized in that the method further comprises:
when the searched video corresponding to the second image is determined to be the video correctly matched with the first image, determining the information of the playing time of the second image in the corresponding video as target playing time information;
generating new sample data according to the first image, the image characteristics of the first image and the target playing time information;
and adding the new sample data into the preset database.
6. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a first image;
the extraction module is used for extracting image features of the first image;
the searching module is used for searching whether a second image matched with the first image exists in a preset database, the preset database comprises a plurality of images and image information corresponding to each image, each image is a video frame extracted from a video, and the image information of each image comprises image characteristics and playing time of the image in the video;
and the display module is used for displaying the information of the playing time of the second image in the corresponding video if the second image matched with the first image is searched, and the image characteristics of the first image and the second image are matched with each other.
7. The image processing device of claim 6, wherein the presentation module is further configured to:
after a playing instruction is received, the corresponding video is played from the playing time of the second image in the corresponding video, the playing instruction is a trigger operation on target information, and the target information is information of the playing time of the displayed second image in the corresponding video.
8. The image processing device of claim 6, wherein the presentation module is further configured to:
when the searched video corresponding to the second image is determined to be the video correctly matched with the first image, determining the information of the playing time of the second image in the corresponding video as target playing time information;
generating new sample data according to the first image, the image characteristics of the first image and the target playing time information;
and adding the new sample data into the preset database.
9. A computer-readable storage medium, on which a computer program is stored, which, when executed on a computer, causes the computer to carry out the method according to any one of claims 1 to 5.
10. An electronic device comprising a memory, a processor, wherein the processor executes the method of any one of claims 1 to 5 by invoking a computer program stored in the memory.
CN202010365488.3A 2020-04-30 2020-04-30 Image processing method, image processing device, storage medium and electronic equipment Pending CN111475677A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010365488.3A CN111475677A (en) 2020-04-30 2020-04-30 Image processing method, image processing device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111475677A true CN111475677A (en) 2020-07-31

Family

ID=71764354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010365488.3A Pending CN111475677A (en) 2020-04-30 2020-04-30 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111475677A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114741553A (en) * 2022-03-31 2022-07-12 慧之安信息技术股份有限公司 Image feature-based picture searching method
WO2022237107A1 (en) * 2021-05-14 2022-11-17 上海擎感智能科技有限公司 Video searching method and system, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871001A (en) * 2017-11-07 2018-04-03 广东欧珀移动通信有限公司 Audio frequency playing method, device, storage medium and electronic equipment
CN108259974A (en) * 2018-03-07 2018-07-06 优酷网络技术(北京)有限公司 Video matching method and device
CN110019933A (en) * 2018-01-02 2019-07-16 阿里巴巴集团控股有限公司 Video data handling procedure, device, electronic equipment and storage medium
CN110225387A (en) * 2019-05-20 2019-09-10 北京奇艺世纪科技有限公司 A kind of information search method, device and electronic equipment
CN110909209A (en) * 2019-11-26 2020-03-24 北京达佳互联信息技术有限公司 Live video searching method and device, equipment, server and storage medium

Non-Patent Citations (1)

Title
Liu Song (ed.), "Digital Image Processing: Principles, Implementation Methods and Practical Exploration", Harbin: Harbin Institute of Technology Press, 31 October 2017, p. 220 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination