CN114741553B - Image feature-based picture searching method - Google Patents


Info

Publication number
CN114741553B
CN114741553B (application number CN202210337862.8A)
Authority
CN
China
Prior art keywords
picture
video
image
picture element
picture frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210337862.8A
Other languages
Chinese (zh)
Other versions
CN114741553A (en)
Inventor
余丹
兰雨晴
黄永琢
王丹星
唐霆岳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Standard Intelligent Security Technology Co Ltd
Original Assignee
China Standard Intelligent Security Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Standard Intelligent Security Technology Co Ltd filed Critical China Standard Intelligent Security Technology Co Ltd
Priority to CN202210337862.8A priority Critical patent/CN114741553B/en
Publication of CN114741553A publication Critical patent/CN114741553A/en
Application granted granted Critical
Publication of CN114741553B publication Critical patent/CN114741553B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • G06F16/785Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using colour or luminescence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • G06F16/7854Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using shape
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image feature-based picture searching method. Image feature extraction is performed on the videos of a video database to obtain image picture element features, and a picture to be processed from a terminal device is segmented and identified to obtain its picture element features. The picture element features are compared with the image picture element features to determine the matching image picture element features. A video picture frame matching the picture to be processed is then extracted from the video, and its image picture element features are marked. Finally, the marked video picture frame is returned to the terminal device and the related processing records of the picture to be processed are deleted. By comparing the element features contained in the picture, the method determines whether the picture matches a video picture frame, so that the matched video picture frame can be extracted.

Description

Image feature-based picture searching method
Technical Field
The invention relates to the technical field of image data processing, in particular to an image searching method based on image characteristics.
Background
At present, picture searching computes the similarity between two pictures and then decides, from the computed similarity, whether a picture is the target being searched for. This search-by-picture approach must compute similarity over the whole picture area, which not only requires a large-capacity database to store the pictures, but also incurs high time complexity and a large amount of computation. This seriously affects the efficiency and accuracy of picture searching and makes rapid searching impossible.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image feature-based picture searching method: image feature extraction is performed on the videos of a video database to obtain image picture element features, and a picture to be processed from a terminal device is segmented and identified to obtain its picture element features; the picture element features are compared with the image picture element features to determine the matching image picture element features; a video picture frame matching the picture to be processed is extracted from the video and its image picture element features are marked; finally, the marked video picture frame is returned to the terminal device and the related processing records of the picture to be processed are deleted. By comparing the element features contained in the picture, the method determines whether the picture matches a video picture frame, so that the matched frame can be extracted.
The invention provides an image feature-based picture searching method, which comprises the following steps:
Step S1, performing image feature extraction processing on a video from a video database to obtain the image picture element features contained in the video, and storing the image picture element features into a block chain in groups;
s2, segmenting and identifying a picture to be processed from the terminal equipment, thereby extracting picture element characteristics contained in the picture to be processed; comparing the picture element characteristics with image picture element characteristics in a block chain, and determining image picture element characteristics matched with the picture element characteristics;
s3, calibrating and extracting a video picture frame matched with the picture to be processed from the video according to the image picture element characteristics determined by matching; carrying out image picture element feature marking processing on the video picture frame;
and S4, returning the video picture frame subjected to the marking processing to the terminal equipment, and deleting the related processing record information of the picture to be processed.
Further, in step S1, performing image feature extraction processing on a video from a video database, and obtaining image picture element features included in the video specifically includes:
video picture frame extraction processing is carried out on videos in a video database, and all video picture frames contained in the videos are obtained;
after pixel interpolation restoration processing and pixel graying conversion processing are carried out on each video picture frame, character element characteristics and symbol element characteristics contained in an image picture are extracted from each video picture frame and are used as the image picture element characteristics; wherein the character element features or the symbol element features include shape features and color features of characters or symbols existing in the image picture.
Further, in step S1, the pixel interpolation restoration processing performed on each video picture frame uses the color saturation of the surrounding pixels to carry out smooth, optimal pixel interpolation restoration. The specific process includes:

Step S101, using the following formula (1), obtaining the comprehensive color saturation value around each pixel point of each video picture frame from the RGB color values of the pixels surrounding that pixel point:

$$\overline{(R,G,B)}(i,j)=\frac{1}{N\{(a,b)\in[D(i),D(j)]\}}\sum_{(a,b)\in[D(i),D(j)]}(R,G,B)(a,b)\tag{1}$$

In formula (1), $\overline{(R,G,B)}(i,j)$ represents the comprehensive color-saturation RGB value around the pixel point in row i, column j of the video picture frame; $(a,b)\in[D(i),D(j)]$ denotes the pixel points surrounding the row-i, column-j pixel point; $(R,G,B)(a,b)$ denotes the RGB values of the pixel point in row a, column b of the video picture frame; and $N\{(a,b)\in[D(i),D(j)]\}$ denotes the total number of pixel points surrounding the row-i, column-j pixel point.

Step S102, using the following formula (2), obtaining the interpolation repair rate of each pixel point of each video picture frame from the comprehensive color saturation value around it:

$$W_R(i,j)=\frac{\overline{R}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[R(a,b)]},\qquad W_G(i,j)=\frac{\overline{G}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[G(a,b)]},\qquad W_B(i,j)=\frac{\overline{B}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[B(a,b)]}\tag{2}$$

In formula (2), $W_R(i,j)$, $W_G(i,j)$ and $W_B(i,j)$ represent the interpolation repair rates of the R, G and B values of the row-i, column-j pixel point of the video picture frame; $\overline{R}(i,j)$, $\overline{G}(i,j)$ and $\overline{B}(i,j)$ are the R, G and B components of $\overline{(R,G,B)}(i,j)$; and $\max_{(a,b)\in[D(i),D(j)]}[R(a,b)]$, $\max_{(a,b)\in[D(i),D(j)]}[G(a,b)]$ and $\max_{(a,b)\in[D(i),D(j)]}[B(a,b)]$ denote the maximum R, G and B values among the pixel points surrounding the row-i, column-j pixel point.

Step S103, using the following formula (3), interpolating and restoring each pixel point from its interpolation repair rate and the RGB value of the corresponding pixel point:

$$\widetilde{(R,G,B)}(i,j)=\big(W_R(i,j)\,R(i,j),\;W_G(i,j)\,G(i,j),\;W_B(i,j)\,B(i,j)\big)\tag{3}$$

In formula (3), $\widetilde{(R,G,B)}(i,j)$ represents the RGB value obtained after interpolation restoration of the row-i, column-j pixel point of the video picture frame.
Further, in step S1, storing the image picture element feature groups into a block chain specifically includes:
all video picture frames contained in the video are uniquely numbered, and the image picture element characteristics are added with the unique numbers of the video picture frames to which the image picture element characteristics belong; and then all image picture element characteristics belonging to the same video picture frame are stored in a block chain in a grouping mode.
Further, in step S2, the segmenting and identifying the to-be-processed picture from the terminal device, so as to extract and obtain picture element features included in the to-be-processed picture specifically includes:
acquiring picture profile distribution information of a picture to be processed from terminal equipment, and dividing the picture to be processed into a plurality of picture units according to the picture profile distribution information;
and carrying out shape and color recognition processing on each picture unit to obtain the shape characteristic and the color characteristic of each picture unit as the picture element characteristic.
Further, in step S2, the comparing process is performed on the picture element features and the image picture element features in the block chain, and the determining of the image picture element features matched with the picture element features specifically includes:
comparing the picture element characteristics with image picture element characteristics in a block chain, and determining respective similarity values of the picture element characteristics and the image picture element characteristics on shape elements and color elements; if the similarity values of the shape element and the color element are larger than or equal to a preset similarity threshold, determining that the picture element characteristics are matched with the current image picture element characteristics; otherwise, determining that the picture element characteristics do not match the current image picture element characteristics.
Further, in step S3, calibrating and extracting a video picture frame matched with the to-be-processed picture from the video according to the image picture element characteristics determined by the matching specifically includes:
extracting the uniqueness number of the video picture frame to which the matched image picture element feature belongs from the image picture element feature matched with the picture element feature;
determining, according to the uniqueness number, the time axis position in the video of the video picture frame to which the matched image picture element features belong;
and extracting a video picture frame matched with the picture to be processed from the video according to the position of the time axis.
Further, in step S3, the processing of image element feature labeling on the video picture frame specifically includes:
and carrying out element feature edge contour line drawing and marking processing on all image picture element features contained in the extracted video picture frame.
Further, in step S4, returning the video picture frame that has been marked to the terminal device, and deleting the related processing record information of the picture to be processed specifically includes:
The video picture frame that has been marked is subjected to fidelity compression processing and then returned to the terminal device; at the same time, the picture data of the picture to be processed and the cache data generated by its segmentation and identification processing are deleted.
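As a rough illustration of this final step (not the patent's actual implementation), the frame can be compressed losslessly before it is returned and the cached picture data removed afterwards; the function name, the use of `zlib`, and the file-path cleanup are all illustrative assumptions:

```python
import os
import zlib

def return_and_cleanup(frame_bytes, cache_paths):
    """Compress the marked frame losslessly (preserving fidelity), then
    delete the to-be-processed picture data and segmentation caches."""
    payload = zlib.compress(frame_bytes, 9)  # lossless: decompresses bit-exactly
    for path in cache_paths:
        if os.path.exists(path):
            os.remove(path)  # drop picture data / recognition cache files
    return payload  # bytes to send back to the terminal device
```

A real deployment would more likely re-encode the frame with an image codec (e.g. PNG) than apply raw `zlib`; the sketch only shows the compress-then-clean-up ordering.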
Compared with the prior art, the image feature-based picture searching method performs image feature extraction on the videos of the video database to obtain image picture element features, and segments and identifies the picture to be processed from the terminal device to obtain its picture element features; compares the picture element features with the image picture element features to determine the matching image picture element features; extracts the video picture frame matched with the picture to be processed from the video and marks its image picture element features; and finally returns the marked video picture frame to the terminal device and deletes the related processing records of the picture to be processed. By comparing the element features contained in the picture, the method determines whether the picture matches a video picture frame, so that the matched frame can be extracted.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an image feature-based image searching method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of an image feature-based image searching method according to an embodiment of the present invention. The image feature-based picture searching method comprises the following steps:
step S1, carrying out image feature extraction processing on a video from a video database to obtain image picture element features contained in the video, and storing the image picture element features into a block chain in a grouping manner;
s2, segmenting and identifying the picture to be processed from the terminal equipment, thereby extracting and obtaining picture element characteristics contained in the picture to be processed; comparing the picture element characteristics with the image picture element characteristics in the block chain, and determining the image picture element characteristics matched with the picture element characteristics;
s3, according to the image picture element characteristics determined by matching, video picture frames matched with the picture to be processed are extracted from the video in a calibration mode; carrying out image picture element feature marking processing on the video picture frame;
and S4, returning the video picture frame which is marked to the terminal equipment, and deleting the related processing record information of the picture to be processed.
The beneficial effects of the above technical scheme are: the image feature-based picture searching method carries out image feature extraction processing on videos in a video database to obtain image picture element features, and carries out segmentation and identification processing on pictures to be processed from terminal equipment to obtain picture element features; comparing the picture element characteristics with the image picture element characteristics to determine image picture element characteristics matched with the picture element characteristics; extracting a video picture frame matched with the picture to be processed from the video, and performing image picture element feature marking processing; and finally, returning the video picture frame subjected to marking processing to the terminal equipment, deleting the related processing record information of the picture to be processed, and comparing element characteristics contained in the picture to determine whether the picture is matched with the video picture frame so as to extract the matched video picture frame.
Preferably, in step S1, performing image feature extraction processing on a video from a video database, and obtaining image picture element features included in the video specifically includes:
performing video picture frame extraction processing on a video in a video database to obtain all video picture frames contained in the video;
after pixel interpolation restoration processing and pixel graying conversion processing are carried out on each video picture frame, character element characteristics and symbol element characteristics contained in an image picture are extracted from each video picture frame and are used as the image picture element characteristics; wherein, the character element feature or the symbol element feature comprises a shape feature and a color feature of a character or a symbol existing in the image picture.
The beneficial effects of the above technical scheme are: the method comprises the steps that massive videos exist in a video database, video picture frames of each video are extracted according to a preset time interval, and all video pictures contained in each video are obtained; and then, the shapes and colors of the human/object objects and the symbol objects existing in each video picture frame are identified and extracted, and the shapes and colors of the human/object objects or the symbol objects existing in each video picture frame are independently used as image picture element characteristics, so that the subsequent picture search from the aspect of two image characteristics of the shapes and the colors is facilitated, and the accuracy and the reliability of the picture search are ensured.
Preferably, in step S1, the pixel interpolation repair processing performed on each video picture frame uses the color saturation of the surrounding pixels to carry out smooth, optimal pixel interpolation repair. The specific process includes:

Step S101, using the following formula (1), obtaining the comprehensive color saturation value around each pixel point of each video picture frame from the RGB color values of the pixels surrounding that pixel point:

$$\overline{(R,G,B)}(i,j)=\frac{1}{N\{(a,b)\in[D(i),D(j)]\}}\sum_{(a,b)\in[D(i),D(j)]}(R,G,B)(a,b)\tag{1}$$

In formula (1), $\overline{(R,G,B)}(i,j)$ represents the comprehensive color-saturation RGB value around the pixel point in row i, column j of the video picture frame; $(a,b)\in[D(i),D(j)]$ denotes the pixel points surrounding the row-i, column-j pixel point; $(R,G,B)(a,b)$ denotes the RGB values of the pixel point in row a, column b of the video picture frame; and $N\{(a,b)\in[D(i),D(j)]\}$ denotes the total number of pixel points surrounding the row-i, column-j pixel point.

Step S102, using the following formula (2), obtaining the interpolation repair rate of each pixel point of each video picture frame from the comprehensive color saturation value around it:

$$W_R(i,j)=\frac{\overline{R}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[R(a,b)]},\qquad W_G(i,j)=\frac{\overline{G}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[G(a,b)]},\qquad W_B(i,j)=\frac{\overline{B}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[B(a,b)]}\tag{2}$$

In formula (2), $W_R(i,j)$, $W_G(i,j)$ and $W_B(i,j)$ represent the interpolation repair rates of the R, G and B values of the row-i, column-j pixel point of the video picture frame; $\overline{R}(i,j)$, $\overline{G}(i,j)$ and $\overline{B}(i,j)$ are the R, G and B components of $\overline{(R,G,B)}(i,j)$; and $\max_{(a,b)\in[D(i),D(j)]}[R(a,b)]$, $\max_{(a,b)\in[D(i),D(j)]}[G(a,b)]$ and $\max_{(a,b)\in[D(i),D(j)]}[B(a,b)]$ denote the maximum R, G and B values among the pixel points surrounding the row-i, column-j pixel point.

Step S103, using the following formula (3), interpolating and repairing each pixel point from its interpolation repair rate and the RGB value of the corresponding pixel point:

$$\widetilde{(R,G,B)}(i,j)=\big(W_R(i,j)\,R(i,j),\;W_G(i,j)\,G(i,j),\;W_B(i,j)\,B(i,j)\big)\tag{3}$$

In formula (3), $\widetilde{(R,G,B)}(i,j)$ represents the RGB value obtained after interpolation repair of the row-i, column-j pixel point of the video picture frame.
The beneficial effects of the above technical scheme are: obtaining a color comprehensive saturation value around each pixel point of each video picture frame according to the color RGB value of the pixels around each pixel point of each video picture frame by using the formula (1), and further knowing the average condition of the color values of the pixels around each pixel point, so that the subsequent targeted repair of the pixel points is facilitated; then, obtaining the interpolation restoration rate of each pixel point of each video picture frame according to the color comprehensive saturation value around each pixel point of each video picture frame by using the formula (2), thereby ensuring that the connection between each pixel point and the surrounding pixel points is smooth and not abrupt after being restored; and finally, carrying out interpolation restoration on the pixel points according to the interpolation restoration rate of each pixel point of each video picture frame and the RGB value of the corresponding pixel point by using the formula (3), so that the color of each pixel point after restoration is more saturated, and the feature points in the picture are easily extracted.
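A hedged per-pixel NumPy sketch of formulas (1)–(3), reading them as: per-channel mean over the surrounding pixels (1), mean divided by the per-channel neighbourhood maximum (2), and repair rate multiplied by the pixel's own RGB value (3). The array shapes and the function name are assumptions, not the patent's notation:

```python
import numpy as np

def repair_pixel(neighbors, pixel):
    """neighbors: (N, 3) float array with the RGB values of the pixels
    surrounding one pixel point; pixel: (3,) RGB value of that point."""
    s = neighbors.mean(axis=0)      # formula (1): comprehensive saturation value
    w = s / neighbors.max(axis=0)   # formula (2): per-channel repair rate
    return w * pixel                # formula (3): repaired RGB value
```

Applying this to every pixel of every frame (with `[D(i), D(j)]` as, say, the 8-neighbourhood) reproduces the smoothing effect described above: the repair rate is at most 1, pulling each channel toward the neighbourhood's saturation level.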
Preferably, in step S1, storing the image picture element feature groups into the block chain specifically includes:
all video picture frames contained in the video are uniquely numbered, and the image element characteristics are added with the unique numbers of the video picture frames to which the image element characteristics belong; and then all image picture element characteristics belonging to the same video picture frame are stored in a block chain in a grouping mode.
The beneficial effects of the above technical scheme are as follows: the position of each video picture frame in the time sequence of the video is unique, so the time-axis time point of each video picture frame can serve as its uniqueness number, uniquely marking every frame in the video. Adding to each image picture element feature the uniqueness number of the video picture frame to which it belongs makes all image picture element features identifiable and distinguishable, which facilitates the subsequent grouping and marking of the features and improves the accuracy of searching for and locating them.
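The numbering-and-grouping scheme can be sketched as below; representing the block chain as a plain dictionary keyed by the uniqueness number is of course a placeholder assumption:

```python
def group_features(frames_features, fps):
    """frames_features: per-frame lists of element features, in frame order.
    Returns {unique_number: [(unique_number, feature), ...]}, where the
    unique number is the frame's time-axis point in seconds."""
    groups = {}
    for idx, feats in enumerate(frames_features):
        stamp = round(idx / fps, 3)                  # time-axis point = uniqueness number
        groups[stamp] = [(stamp, f) for f in feats]  # each feature tagged with it
    return groups
```

Each group would then be written to the chain as one record, so that all features of one frame stay together.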
Preferably, in step S2, the segmenting and recognizing the to-be-processed picture from the terminal device, so as to extract picture element features included in the to-be-processed picture specifically includes:
acquiring picture profile distribution information of a picture to be processed from terminal equipment, and dividing the picture to be processed into a plurality of picture units according to the picture profile distribution information;
and carrying out shape and color recognition processing on each picture unit to obtain the shape characteristic and the color characteristic of each picture unit as the picture element characteristic.
The beneficial effects of the above technical scheme are: according to the picture contour line of the picture to be processed from the terminal equipment, the picture to be processed is divided into a plurality of picture units, so that people, objects and symbol objects contained in the picture to be processed can be accurately divided and extracted. And then, carrying out shape and color recognition processing on each picture unit so as to obtain the shape characteristic and the color characteristic of each picture unit, so that the picture units can be characterized in terms of two image characteristics, namely shape and color.
Preferably, in step S2, the comparing process is performed on the picture element feature and an image picture element feature in the block chain, and the determining of the image picture element feature matched with the picture element feature specifically includes:
comparing the picture element characteristics with image picture element characteristics in a block chain, and determining respective similarity values of the picture element characteristics and the image picture element characteristics on shape elements and color elements; if the similarity values of the shape element and the color element are larger than or equal to a preset similarity threshold, determining that the picture element feature is matched with the current image picture element feature; otherwise, determining that the picture element feature does not match the current image picture element feature.
The beneficial effects of the above technical scheme are: the picture element characteristics and the image picture element characteristics are compared with each other by the shape elements and the color elements, so that the successfully matched picture element characteristics and the successfully matched image picture element characteristics are consistent in both the shape and the color.
Preferably, in step S3, according to the image picture element characteristics determined by the matching, the step of extracting a video picture frame matched with the picture to be processed from the video by calibration specifically includes:
extracting the uniqueness number of the video picture frame to which the matched image picture element feature belongs from the image picture element feature matched with the picture element feature;
determining, according to the uniqueness number, the time axis position in the video of the video picture frame to which the matched image picture element features belong;
and extracting the video picture frame matched with the picture to be processed from the video according to the time axis position.
The beneficial effects of the above technical scheme are: the uniqueness number is the time axis time point of each video picture frame in the video, so that the video picture frame matched with the picture to be processed can be quickly and accurately extracted from the video according to the uniqueness number, the video picture frames required by a large amount of video picture frames are avoided being required one by one, and the workload of searching the video picture frames is reduced.
Preferably, in step S3, the processing of image picture element feature marking on the video picture frame specifically includes:
and performing element feature edge contour line drawing marking processing on all the image picture element features contained in the extracted video picture frame.
The beneficial effects of the above technical scheme are: and performing element feature edge contour line delineation marking processing on all image picture element features contained in the extracted video picture frame, wherein the element feature edge contour line delineation marking processing can include but is not limited to thickening processing on edge contour lines of element features in pictures of the video picture frame, so that the visual brightness of image picture elements can be improved.
Preferably, in step S4, returning the video picture frame with the mark processing completed to the terminal device, and deleting the related processing record information of the picture to be processed specifically includes:
and returning the video picture frame subjected to the marking processing to the terminal equipment after the fidelity compression processing is carried out on the video picture frame, and simultaneously deleting the picture data of the picture to be processed and the related cache data for carrying out segmentation and identification processing on the picture to be processed.
The beneficial effects of the above technical scheme are: and the video picture frame after the marking processing is subjected to fidelity compression processing and then returns to the terminal equipment, so that the transmission efficiency of the video picture frame can be improved, and the distortion in the transmission process can be avoided. In addition, the picture data of the picture to be processed and the related cache data for segmenting and identifying the picture to be processed are deleted, so that the picture to be processed can be prevented from being stolen and tampered, and the data security of picture processing is improved.
As can be seen from the above embodiment, the image feature-based picture searching method performs image feature extraction processing on the videos in a video database to obtain image picture element features, and performs segmentation and recognition processing on the picture to be processed from the terminal device to obtain picture element features. It compares the picture element features with the image picture element features to determine the matching image picture element features, extracts the video picture frame matched with the picture to be processed from the video, and performs image picture element feature marking processing on it. Finally, it returns the marked video picture frame to the terminal device and deletes the related processing record information of the picture to be processed. In this way, whether a picture matches a video picture frame is determined by comparing the element features they contain, so that the matched video picture frame can be extracted.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (7)

1. An image feature-based picture searching method, characterized by comprising the following steps:
S1, performing image feature extraction processing on the videos from a video database to obtain the image picture element features contained in the videos, and storing the image picture element features in groups into a block chain;
s2, segmenting and identifying a picture to be processed from the terminal equipment, thereby extracting picture element characteristics contained in the picture to be processed; comparing the picture element characteristics with image picture element characteristics in a block chain, and determining image picture element characteristics matched with the picture element characteristics;
s3, calibrating and extracting a video picture frame matched with the picture to be processed from the video according to the image picture element characteristics determined by matching; carrying out image picture element feature marking processing on the video picture frame;
s4, returning the video picture frame subjected to the marking processing to the terminal equipment, and deleting the related processing record information of the picture to be processed;
in step S1, performing image feature extraction processing on a video from a video database to obtain image picture element features included in the video specifically includes:
video picture frame extraction processing is carried out on videos in a video database, and all video picture frames contained in the videos are obtained;
after pixel interpolation restoration processing and pixel graying conversion processing are carried out on each video picture frame, character element characteristics and symbol element characteristics contained in an image picture are extracted from each video picture frame and are used as the image picture element characteristics; wherein the character element features or the symbol element features comprise shape features and color features of characters or symbols existing in the image picture;
in step S1, the pixel interpolation restoration processing on each video picture frame comprises performing smooth, optimal pixel interpolation restoration using the color saturation of the surrounding pixels; the specific process comprises:
step S101, using the following formula (1), obtaining the color comprehensive saturation value around each pixel point of each video picture frame from the RGB color values of the pixels surrounding that pixel point:

$$(\bar{R},\bar{G},\bar{B})(i,j)=\frac{1}{n\{(a,b)\in[D(i),D(j)]\}}\sum_{(a,b)\in[D(i),D(j)]}(R,G,B)(a,b) \tag{1}$$

In the above formula (1), $(\bar{R},\bar{G},\bar{B})(i,j)$ represents the color comprehensive saturation RGB values around the pixel point in the i-th row and j-th column of the video picture frame; $(a,b)\in[D(i),D(j)]$ represents the pixel points surrounding the pixel point in the i-th row and j-th column of the video picture frame; $(R,G,B)(a,b)$ represents the RGB values of the pixel point in the a-th row and b-th column of the video picture frame; and $n\{(a,b)\in[D(i),D(j)]\}$ represents the total number of pixel points surrounding the pixel point in the i-th row and j-th column of the video picture frame;
step S102, using the following formula (2), obtaining the interpolation repair rate of each pixel point of each video picture frame from the color comprehensive saturation value around that pixel point:

$$W_R(i,j)=\frac{\bar{R}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[R(a,b)]},\qquad W_G(i,j)=\frac{\bar{G}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[G(a,b)]},\qquad W_B(i,j)=\frac{\bar{B}(i,j)}{\max\limits_{(a,b)\in[D(i),D(j)]}[B(a,b)]} \tag{2}$$

In the above formula (2), $W_R(i,j)$, $W_G(i,j)$ and $W_B(i,j)$ represent the interpolation repair rates of the R, G and B values, respectively, among the RGB values of the pixel point in the i-th row and j-th column of the video picture frame; $\bar{R}(i,j)$, $\bar{G}(i,j)$ and $\bar{B}(i,j)$ represent the R, G and B values of $(\bar{R},\bar{G},\bar{B})(i,j)$; $\max_{(a,b)\in[D(i),D(j)]}[R(a,b)]$, $\max_{(a,b)\in[D(i),D(j)]}[G(a,b)]$ and $\max_{(a,b)\in[D(i),D(j)]}[B(a,b)]$ represent the maximum R, G and B values, respectively, among the pixel points surrounding the pixel point in the i-th row and j-th column of the video picture frame;
step S103, using the following formula (3), performing interpolation restoration on each pixel point according to its interpolation repair rate and the RGB values of the corresponding pixel point:

$$(\tilde{R},\tilde{G},\tilde{B})(i,j)=\bigl(W_R(i,j)\,R(i,j),\;W_G(i,j)\,G(i,j),\;W_B(i,j)\,B(i,j)\bigr) \tag{3}$$

In the above formula (3), $(\tilde{R},\tilde{G},\tilde{B})(i,j)$ represents the RGB values obtained after interpolation restoration of the pixel point in the i-th row and j-th column of the video picture frame.
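A numeric sketch of steps S101 to S103, under stated assumptions: the neighbourhood [D(i), D(j)] is taken as the up-to-8 adjacent pixels, formula (1) as the per-channel average of the surrounding pixels, formula (2) as the ratio of that average to the per-channel neighbourhood maximum, and formula (3) as rate times pixel value. These readings follow the claim's textual definitions, not a definitive implementation:

```python
import numpy as np

def restore_pixels(frame):
    """Apply steps S101-S103 to an H x W x 3 float RGB frame, taking
    [D(i), D(j)] as the up-to-8 pixels adjacent to (i, j)."""
    h, w, _ = frame.shape
    restored = frame.astype(float)
    for i in range(h):
        for j in range(w):
            neigh = np.array([frame[a, b]
                              for a in range(max(i - 1, 0), min(i + 2, h))
                              for b in range(max(j - 1, 0), min(j + 2, w))
                              if (a, b) != (i, j)], dtype=float)
            sat = neigh.mean(axis=0)   # formula (1): surrounding saturation
            peak = neigh.max(axis=0)   # per-channel neighbourhood maximum
            rate = np.where(peak > 0, sat / np.maximum(peak, 1e-9), 1.0)  # formula (2)
            restored[i, j] = rate * frame[i, j]  # formula (3)
    return restored
```

On a uniform frame the repair rate is 1 everywhere and the frame is left unchanged, which is the expected fixed point of the restoration.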
2. The image feature-based picture searching method of claim 1, wherein:
in step S1, storing the image picture element feature groups into a block chain specifically includes:
all video picture frames contained in the video are uniquely numbered, and each image picture element feature is tagged with the uniqueness number of the video picture frame to which it belongs; all image picture element features belonging to the same video picture frame are then stored as a group into the block chain.
3. The image feature-based picture searching method of claim 2, wherein:
in step S2, segmenting and recognizing the to-be-processed picture from the terminal device, so as to extract picture element features included in the to-be-processed picture, specifically including:
acquiring picture profile distribution information of a picture to be processed from terminal equipment, and dividing the picture to be processed into a plurality of picture units according to the picture profile distribution information;
and carrying out shape and color recognition processing on each picture unit to obtain the shape characteristic and the color characteristic of each picture unit as the picture element characteristic.
4. The image feature-based picture searching method of claim 3, wherein:
in step S2, the comparing process is performed on the picture element features and the image picture element features in the block chain, and the determining of the image picture element features matched with the picture element features specifically includes:
comparing the picture element features with the image picture element features in the block chain, and determining their respective similarity values on the shape elements and the color elements; if the similarity values on both the shape elements and the color elements are greater than or equal to a preset similarity threshold, determining that the picture element features match the current image picture element features; otherwise, determining that the picture element features do not match the current image picture element features.
5. The image feature-based picture searching method of claim 4, wherein:
in the step S3, calibrating and extracting a video picture frame matched with the picture to be processed from the video according to the image picture element characteristics determined by the matching specifically includes:
extracting the uniqueness number of the video picture frame to which the matched image picture element feature belongs from the image picture element feature matched with the picture element feature;
determining, according to the uniqueness number, the time axis position in the video of the video picture frame to which the matched image picture element features belong;
and extracting a video picture frame matched with the picture to be processed from the video according to the position of the time axis.
6. The image feature-based picture searching method of claim 5, wherein:
in step S3, the image picture element feature labeling processing on the video picture frame specifically includes:
and carrying out element feature edge contour line drawing and marking processing on all image picture element features contained in the extracted video picture frame.
7. The image feature-based picture searching method of claim 6, wherein:
in step S4, returning the video picture frame that has completed the marking process to the terminal device, and deleting the related processing record information of the picture to be processed specifically includes:
and returning the video picture frame subjected to the marking processing to the terminal equipment after the video picture frame is subjected to the fidelity compression processing, and simultaneously deleting the picture data of the picture to be processed and the related cache data for segmenting and identifying the picture to be processed.
CN202210337862.8A 2022-03-31 2022-03-31 Image feature-based picture searching method Active CN114741553B (en)

Publications (2)

Publication Number Publication Date
CN114741553A CN114741553A (en) 2022-07-12
CN114741553B true CN114741553B (en) 2023-03-24






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant