CN114741553A - Image feature-based picture searching method - Google Patents

Image feature-based picture searching method

Info

Publication number
CN114741553A
CN114741553A (application CN202210337862.8A)
Authority
CN
China
Prior art keywords
picture
video
image
picture frame
picture element
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210337862.8A
Other languages
Chinese (zh)
Other versions
CN114741553B (en)
Inventor
余丹
兰雨晴
黄永琢
王丹星
唐霆岳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Standard Intelligent Security Technology Co Ltd
Original Assignee
China Standard Intelligent Security Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Standard Intelligent Security Technology Co Ltd
Priority to CN202210337862.8A
Publication of CN114741553A
Application granted
Publication of CN114741553B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 - Information retrieval of video data
    • G06F 16/73 - Querying
    • G06F 16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 - Retrieval using metadata automatically derived from the content
    • G06F 16/7847 - Retrieval using low-level visual features of the video content
    • G06F 16/785 - Retrieval using low-level visual features of the video content, using colour or luminescence
    • G06F 16/7854 - Retrieval using low-level visual features of the video content, using shape
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image feature-based picture searching method. Image feature extraction is performed on the videos in a video database to obtain image picture element features, and a to-be-processed picture received from a terminal device is segmented and recognized to obtain its picture element features; the picture element features are compared with the image picture element features to determine the image picture element features that match them; the video picture frames matching the to-be-processed picture are then extracted from the video and subjected to image picture element feature marking; finally, the marked video picture frames are returned to the terminal device and the related processing record information of the to-be-processed picture is deleted. Whether pictures match is thus determined by comparing the element features they contain, so that the matching video picture frames can be extracted.

Description

Image feature-based picture searching method
Technical Field
The invention relates to the technical field of image data processing, and in particular to an image feature-based picture searching method.
Background
At present, picture searching generally calculates the similarity between two pictures and then decides, according to the calculated similarity, whether a picture is the target picture being searched for. In this way of searching, the similarity has to be computed over the whole picture area, which not only requires a large-capacity database to store the pictures but also entails high time complexity and a large amount of computation. This seriously affects the efficiency and accuracy of picture searching and prevents fast picture retrieval.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image feature-based picture searching method. Image feature extraction is performed on the videos in a video database to obtain image picture element features, and a to-be-processed picture received from a terminal device is segmented and recognized to obtain its picture element features; the picture element features are compared with the image picture element features to determine the image picture element features that match them; the video picture frames matching the to-be-processed picture are extracted from the video and subjected to image picture element feature marking; finally, the marked video picture frames are returned to the terminal device and the related processing record information of the to-be-processed picture is deleted. Whether pictures match is thus determined by comparing the element features they contain, so that the matching video picture frames can be extracted.
The invention provides an image feature-based picture searching method, which comprises the following steps:
step S1, carrying out image feature extraction processing on the video from a video database to obtain the image picture element features contained in the video, and storing the image picture element features into a block chain in a grouping manner;
step S2, carrying out segmentation and identification processing on the picture to be processed from the terminal equipment, thereby extracting and obtaining picture element characteristics contained in the picture to be processed; comparing the picture element characteristics with image picture element characteristics in a block chain, and determining image picture element characteristics matched with the picture element characteristics;
step S3, according to the image picture element characteristics determined by the matching, calibrating and extracting the video picture frame matched with the picture to be processed from the video; carrying out image picture element feature marking processing on the video picture frame;
and step S4, returning the video picture frame after the marking processing to the terminal equipment, and deleting the related processing record information of the picture to be processed.
Further, in step S1, performing image feature extraction processing on the video from the video database, and obtaining image picture element features included in the video specifically includes:
performing video picture frame extraction processing on a video in a video database to obtain all video picture frames contained in the video;
after pixel interpolation restoration processing and pixel graying conversion processing are carried out on each video picture frame, character element characteristics and symbol element characteristics contained in an image picture are extracted from each video picture frame and are used as the image picture element characteristics; wherein the character element features or the symbol element features comprise shape features and color features of characters or symbols existing in the image picture.
Further, in step S1, the pixel interpolation repair processing performed on each video picture frame comprises a smoothing-optimized pixel interpolation repair based on the color saturation of the surrounding pixels, and the specific process includes:
Step S101, using the following formula (1), obtaining the color comprehensive saturation value around each pixel point of each video picture frame from the color RGB values of the pixels surrounding that pixel point:
(R,G,B)‾(i,j) = ( Σ_{(a,b)∈[D(i),D(j)]} (R,G,B)(a,b) ) / n{(a,b)∈[D(i),D(j)]}    (1)
In the above formula (1), (R,G,B)‾(i,j) represents the color comprehensive saturation RGB value around the pixel point in row i, column j of the video picture frame; (a,b)∈[D(i),D(j)] represents the pixel points in row a, column b surrounding the pixel point in row i, column j of the video picture frame; (R,G,B)(a,b) represents the RGB value of the pixel point in row a, column b of the video picture frame; n{(a,b)∈[D(i),D(j)]} represents the total number of pixel points surrounding the pixel point in row i, column j of the video picture frame;
Step S102, using the following formula (2), obtaining the interpolation repair rate of each pixel point of each video picture frame from the color comprehensive saturation value around that pixel point:
[Formula (2), set out as an equation image in the original publication]
In the above formula (2), W_R(i,j), W_G(i,j) and W_B(i,j) represent the interpolation repair rates of the R value, the G value and the B value, respectively, in the RGB value of the pixel point in row i, column j of the video picture frame; (R,G,B)‾_R(i,j), (R,G,B)‾_G(i,j) and (R,G,B)‾_B(i,j) represent the R value, the G value and the B value of (R,G,B)‾(i,j); max_{(a,b)∈[D(i),D(j)]}[R(a,b)], max_{(a,b)∈[D(i),D(j)]}[G(a,b)] and max_{(a,b)∈[D(i),D(j)]}[B(a,b)] represent the maximum R value, G value and B value, respectively, among the pixel points surrounding the pixel point in row i, column j of the video picture frame;
Step S103, using the following formula (3), performing interpolation repair on each pixel point according to the interpolation repair rate of that pixel point and the RGB value of the corresponding pixel point:
[Formula (3), set out as an equation image in the original publication]
In the above formula (3), the left-hand side represents the RGB value obtained after interpolation repair of the pixel point in row i, column j of the video picture frame.
Further, in step S1, storing the image picture element features into a block chain in groups specifically includes:
assigning a unique number to every video picture frame contained in the video, and tagging each image picture element feature with the unique number of the video picture frame to which it belongs; all image picture element features belonging to the same video picture frame are then stored in the block chain as one group.
Further, in step S2, the segmenting and recognizing the to-be-processed picture from the terminal device, so that the extracting and obtaining picture element features included in the to-be-processed picture specifically includes:
acquiring picture profile distribution information of a picture to be processed from terminal equipment, and dividing the picture to be processed into a plurality of picture units according to the picture profile distribution information;
and carrying out shape and color recognition processing on each picture unit to obtain the shape characteristic and the color characteristic of each picture unit as the picture element characteristic.
Further, in step S2, the comparing process is performed on the picture element features and the image picture element features in the block chain, and the determining of the image picture element features matched with the picture element features specifically includes:
comparing the picture element characteristics with image picture element characteristics in a block chain, and determining respective similarity values of the picture element characteristics and the image picture element characteristics on shape elements and color elements; if the similarity values of the shape element and the color element are larger than or equal to a preset similarity threshold, determining that the picture element characteristics are matched with the current image picture element characteristics; otherwise, determining that the picture element characteristics do not match the current image picture element characteristics.
Further, in step S3, calibrating and extracting a video picture frame matched with the to-be-processed picture from the video according to the image picture element characteristics determined by the matching specifically includes:
extracting the uniqueness number of the video picture frame to which the matched image picture element feature belongs from the image picture element feature matched with the picture element feature;
determining, according to the uniqueness number, the time axis position in the video of the video picture frame to which the matched image picture element features belong;
and extracting a video picture frame matched with the picture to be processed from the video according to the position of the time axis.
Further, in the step S3, the process of labeling image picture element features of the video picture frame specifically includes:
and carrying out element feature edge contour line drawing and marking processing on all image picture element features contained in the extracted video picture frame.
Further, in step S4, returning the video picture frame with the mark processing completed to the terminal device, and deleting the related processing record information of the picture to be processed specifically includes:
and returning the video picture frame subjected to the marking processing to the terminal equipment after the video picture frame is subjected to the fidelity compression processing, and simultaneously deleting the picture data of the picture to be processed and the related cache data for segmenting and identifying the picture to be processed.
Compared with the prior art, the image feature-based picture searching method performs image feature extraction on the videos in a video database to obtain image picture element features, and segments and recognizes the to-be-processed picture from the terminal device to obtain its picture element features; the picture element features are compared with the image picture element features to determine the image picture element features that match them; the video picture frames matching the to-be-processed picture are extracted from the video and subjected to image picture element feature marking; finally, the marked video picture frames are returned to the terminal device and the related processing record information of the to-be-processed picture is deleted. Whether pictures match is thus determined by comparing the element features they contain, so that the matching video picture frames can be extracted.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of an image feature-based image searching method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of an image feature-based image searching method according to an embodiment of the present invention. The image feature-based picture searching method comprises the following steps:
step S1, extracting image characteristics of the video from the video database to obtain the image element characteristics contained in the video, and storing the image element characteristics into a block chain in groups;
step S2, the picture to be processed from the terminal equipment is divided and identified, so as to extract and obtain the picture element characteristics contained in the picture to be processed; comparing the picture element characteristics with the image picture element characteristics in the block chain, and determining the image picture element characteristics matched with the picture element characteristics;
step S3, according to the image picture element characteristics determined by the matching, extracting the video picture frame matched with the picture to be processed from the video; and the video picture frame is subjected to image picture element feature marking processing;
And step S4, returning the video picture frame on which the marking processing has been completed to the terminal device, and deleting the related processing record information of the picture to be processed.
The beneficial effects of the above technical scheme are: the image feature-based picture searching method comprises the steps of carrying out image feature extraction processing on videos in a video database to obtain image picture element features, and carrying out segmentation and identification processing on pictures to be processed from terminal equipment to obtain picture element features; comparing the picture element characteristics with the image picture element characteristics to determine image picture element characteristics matched with the picture element characteristics; extracting a video picture frame matched with the picture to be processed from the video, and performing image picture element feature marking processing; and finally, returning the video picture frame subjected to marking processing to the terminal equipment, deleting the related processing record information of the picture to be processed, and comparing element characteristics contained in the picture to determine whether the picture is matched with the video picture frame so as to extract the matched video picture frame.
Preferably, in step S1, the image feature extraction processing is performed on the video from the video database, and obtaining the image picture element features included in the video specifically includes:
performing video picture frame extraction processing on a video in a video database to obtain all video picture frames contained in the video;
after pixel interpolation restoration processing and pixel graying conversion processing are carried out on each video picture frame, character element characteristics and symbol element characteristics contained in an image picture are extracted from each video picture frame and are used as the image picture element characteristics; wherein, the character element feature or the symbol element feature comprises a shape feature and a color feature of a character or a symbol existing in the image picture.
The beneficial effects of the above technical scheme are: the method comprises the steps that massive videos exist in a video database, video picture frames of each video are extracted according to a preset time interval, and all video pictures contained in each video are obtained; and then, the shapes and colors of the human/object objects and the symbol objects existing in each video picture frame are identified and extracted, and the shapes and colors of the human/object objects or the symbol objects existing in each video picture frame are independently used as image picture element characteristics, so that the subsequent picture search from the aspect of two image characteristics of the shapes and the colors is facilitated, and the accuracy and the reliability of the picture search are ensured.
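As an illustration only, the following Python sketch shows one way the frame sampling and the coarse shape/color element feature extraction described above could be implemented with OpenCV; the one-second sampling interval, the Otsu-threshold contour detection, the Hu-moment shape descriptor and the HSV color histogram are assumptions of this sketch, not requirements of the method.

```python
import cv2
import numpy as np

def sample_frames(video_path, interval_s=1.0):
    """Extract one frame per `interval_s` seconds together with its timeline position (ms)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(round(fps * interval_s)), 1)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append((cap.get(cv2.CAP_PROP_POS_MSEC), frame))
        idx += 1
    cap.release()
    return frames

def element_features(frame):
    """Coarse shape + color features, one per contour region found in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for c in contours:
        if cv2.contourArea(c) < 100:                       # ignore tiny regions
            continue
        shape = cv2.HuMoments(cv2.moments(c)).flatten()    # shape feature
        x, y, w, h = cv2.boundingRect(c)
        patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        color = cv2.calcHist([patch], [0, 1], None, [8, 4], [0, 180, 0, 256]).flatten()
        color = color / (color.sum() + 1e-9)               # normalised color feature
        feats.append({"shape": shape, "color": color})
    return feats
```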
Preferably, in step S1, the pixel interpolation repair processing performed on each video picture frame comprises a smoothing-optimized pixel interpolation repair based on the color saturation of the surrounding pixels, and the specific process includes:
Step S101, using the following formula (1), obtaining the color comprehensive saturation value around each pixel point of each video picture frame from the color RGB values of the pixels surrounding that pixel point:
(R,G,B)‾(i,j) = ( Σ_{(a,b)∈[D(i),D(j)]} (R,G,B)(a,b) ) / n{(a,b)∈[D(i),D(j)]}    (1)
In the above formula (1), (R,G,B)‾(i,j) represents the color comprehensive saturation RGB value around the pixel point in row i, column j of the video picture frame; (a,b)∈[D(i),D(j)] represents the pixel points in row a, column b surrounding the pixel point in row i, column j of the video picture frame; (R,G,B)(a,b) represents the RGB value of the pixel point in row a, column b of the video picture frame; n{(a,b)∈[D(i),D(j)]} represents the total number of pixel points surrounding the pixel point in row i, column j of the video picture frame;
Step S102, using the following formula (2), obtaining the interpolation repair rate of each pixel point of each video picture frame from the color comprehensive saturation value around that pixel point:
[Formula (2), set out as an equation image in the original publication]
In the above formula (2), W_R(i,j), W_G(i,j) and W_B(i,j) represent the interpolation repair rates of the R value, the G value and the B value, respectively, in the RGB value of the pixel point in row i, column j of the video picture frame; (R,G,B)‾_R(i,j), (R,G,B)‾_G(i,j) and (R,G,B)‾_B(i,j) represent the R value, the G value and the B value of (R,G,B)‾(i,j); max_{(a,b)∈[D(i),D(j)]}[R(a,b)], max_{(a,b)∈[D(i),D(j)]}[G(a,b)] and max_{(a,b)∈[D(i),D(j)]}[B(a,b)] represent the maximum R value, G value and B value, respectively, among the pixel points surrounding the pixel point in row i, column j of the video picture frame;
Step S103, using the following formula (3), performing interpolation repair on each pixel point according to the interpolation repair rate of that pixel point and the RGB value of the corresponding pixel point:
[Formula (3), set out as an equation image in the original publication]
In the above formula (3), the left-hand side represents the RGB value obtained after interpolation repair of the pixel point in row i, column j of the video picture frame.
The beneficial effects of the above technical scheme are: obtaining a color comprehensive saturation value around each pixel point of each video picture frame according to the color RGB value of the pixels around each pixel point of each video picture frame by using the formula (1), and further knowing the average condition of the color values of the pixels around each pixel point, so that the subsequent targeted repair of the pixel points is facilitated; then, obtaining the interpolation restoration rate of each pixel point of each video picture frame according to the color comprehensive saturation value around each pixel point of each video picture frame by using the formula (2), thereby ensuring that the connection between each pixel point and the surrounding pixel points is smooth and not abrupt after being restored; and finally, carrying out interpolation restoration on the pixel points according to the interpolation restoration rate of each pixel point of each video picture frame and the RGB values of the corresponding pixel points by using the formula (3), so that the color of each pixel point after restoration is more saturated, and the feature points in the picture are easily extracted.
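Formulas (2) and (3) are reproduced only as images in the published text, so the sketch below should be read as an assumption-laden illustration of the repair scheme rather than the patented formulas: formula (1) is taken as the 8-neighbourhood mean, the repair rate is assumed to be the per-channel ratio of that mean to the neighbourhood maximum, and the repaired value is assumed to be a blend of the original pixel and the mean weighted by that rate.

```python
import numpy as np

def neighborhood_repair(img, eps=1e-9):
    """Illustrative pixel interpolation repair driven by 8-neighbourhood color statistics.

    Only the neighbourhood mean (formula (1)) is taken from the text; the repair rate and
    the final blend are assumptions of this sketch, not the patent's formulas (2) and (3).
    """
    img = img.astype(np.float64)
    h, w, _ = img.shape
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # Stack the 8 neighbours of every pixel: shape (8, h, w, 3).
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    neigh = np.stack([padded[1 + di:1 + di + h, 1 + dj:1 + dj + w] for di, dj in shifts])
    mean = neigh.mean(axis=0)                    # formula (1): per-channel neighbourhood mean
    rate = mean / (neigh.max(axis=0) + eps)      # assumed repair rate in [0, 1]
    repaired = rate * img + (1.0 - rate) * mean  # assumed repair blend toward the mean
    return np.clip(repaired, 0, 255).astype(np.uint8)
```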
Preferably, in step S1, storing the image picture element features into the block chain in groups specifically includes:
assigning a unique number to every video picture frame contained in the video, and tagging each image picture element feature with the unique number of the video picture frame to which it belongs; all image picture element features belonging to the same video picture frame are then stored in the block chain as one group.
The beneficial effects of the above technical scheme are: the appearance time sequence position of each video picture frame in the video is unique, and the time axis time point of each video picture frame in the video is used as a unique number, so that the code mark for uniquely marking each video picture frame in the video can be performed. And the uniqueness number of the video picture frame to which the image picture element feature belongs is added to the image picture element feature, so that all the image picture element features can be identified and distinguished, the subsequent grouping and marking of the image picture element features are facilitated, and the accuracy of searching and positioning the image picture element features is improved.
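A minimal sketch of this grouping step, assuming the timeline position in milliseconds (as returned by the earlier sampling sketch) serves as the uniqueness number; `put_block` is a placeholder for whatever block chain client write operation is actually used, which the patent does not specify.

```python
import json

def group_features_by_frame(video_id, sampled_frames, extract):
    """Tag every element feature with the uniqueness number of its frame and group per frame."""
    groups = []
    for pos_ms, frame in sampled_frames:
        unique_no = f"{video_id}:{int(pos_ms)}"      # timeline position used as uniqueness number
        feats = [{"frame_no": unique_no,
                  "shape": f["shape"].tolist(),
                  "color": f["color"].tolist()} for f in extract(frame)]
        groups.append({"frame_no": unique_no, "features": feats})
    return groups

def store_groups(groups, put_block):
    """`put_block(key, payload)` stands in for the real block chain write, not specified here."""
    for g in groups:
        put_block(key=g["frame_no"], payload=json.dumps(g))
```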
Preferably, in step S2, the segmenting and recognizing the to-be-processed picture from the terminal device, so as to extract the picture element features included in the to-be-processed picture specifically includes:
acquiring picture profile distribution information of a picture to be processed from terminal equipment, and dividing the picture to be processed into a plurality of picture units according to the picture profile distribution information;
and carrying out shape and color recognition processing on each picture unit to obtain the shape characteristic and the color characteristic of each picture unit as the picture element characteristic.
The beneficial effects of the above technical scheme are: according to the picture contour line of the picture to be processed from the terminal equipment, the picture to be processed is divided into a plurality of picture units, so that people, objects and symbol objects contained in the picture to be processed can be accurately divided and extracted. And then, carrying out shape and color recognition processing on each picture unit so as to obtain the shape characteristic and the color characteristic of each picture unit, so that the picture units can be characterized in terms of two image characteristics, namely shape and color.
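On the query side the same kind of contour-driven segmentation can be reused; the two lines below simply apply the hypothetical `element_features` helper from the earlier sketch to the uploaded picture, which is an assumption of this sketch rather than a prescribed implementation.

```python
import cv2

picture = cv2.imread("query.jpg")             # to-be-processed picture from the terminal device
picture_features = element_features(picture)  # one shape/color feature per picture unit
```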
Preferably, in step S2, the comparing the feature of the picture element with the feature of the image picture element in the block chain, and the determining the feature of the image picture element matching the feature of the picture element specifically includes:
comparing the picture element characteristics with image picture element characteristics in a block chain, and determining respective similarity values of the picture element characteristics and the image picture element characteristics on shape elements and color elements; if the similarity values of the shape element and the color element are larger than or equal to a preset similarity threshold, determining that the picture element feature is matched with the current image picture element feature; otherwise, it is determined that the picture element characteristic does not match the current image picture element characteristic.
The beneficial effects of the above technical scheme are: the picture element characteristics and the image picture element characteristics are compared with each other by the shape elements and the color elements, so that the successfully matched picture element characteristics and the successfully matched image picture element characteristics are consistent in both the shape and the color.
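A sketch of one possible comparison rule, assuming a Hu-moment distance converted to a similarity for the shape element, cosine similarity for the color element, and an illustrative threshold of 0.8; the patent only requires that both similarity values reach a preset threshold.

```python
import numpy as np

def cosine(u, v, eps=1e-9):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def features_match(picture_feat, image_feat, threshold=0.8):
    """Both the shape similarity and the color similarity must reach the threshold."""
    # `shape` and `color` are the numpy feature vectors produced by the earlier sketches.
    shape_sim = 1.0 / (1.0 + np.linalg.norm(picture_feat["shape"] - image_feat["shape"]))
    color_sim = cosine(picture_feat["color"], image_feat["color"])
    return shape_sim >= threshold and color_sim >= threshold
```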
Preferably, in step S3, the step of extracting, from the video, a video picture frame matched with the to-be-processed picture according to the image picture element feature determined by matching specifically includes:
extracting the uniqueness number of the video picture frame to which the matched image picture element feature belongs from the image picture element feature matched with the picture element feature;
determining, according to the uniqueness number, the time axis position in the video of the video picture frame to which the matched image picture element features belong;
and extracting the video picture frame matched with the picture to be processed from the video according to the time axis position.
The beneficial effects of the above technical scheme are: the uniqueness number is the time axis time point of each video picture frame in the video, so that the video picture frame matched with the picture to be processed can be quickly and accurately extracted from the video according to the uniqueness number, the video picture frames required by a large amount of video picture frames are avoided being required one by one, and the workload of searching the video picture frames is reduced.
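One way to pull the matched frame back out of the video by its time axis position, assuming the uniqueness number encodes the position in milliseconds as in the earlier grouping sketch.

```python
import cv2

def extract_frame(video_path, unique_no):
    """`unique_no` is assumed to have the form '<video_id>:<position_ms>'."""
    pos_ms = int(unique_no.split(":")[1])
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, pos_ms)   # seek straight to the time axis position
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None
```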
Preferably, in step S3, the image picture element feature labeling processing on the video picture frame specifically includes:
and performing element feature edge contour line drawing marking processing on all the image picture element features contained in the extracted video picture frame.
The beneficial effects of the above technical scheme are: and performing element feature edge contour line delineation marking processing on all image picture element features contained in the extracted video picture frame, wherein the element feature edge contour line delineation marking processing can include but is not limited to thickening processing on edge contour lines of element features in pictures of the video picture frame, so that the visual brightness of image picture elements can be improved.
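A sketch of the marking step as contour thickening; the green colour and 3-pixel line width are arbitrary choices of this illustration, and `contours` would be the element contours re-detected in the extracted frame.

```python
import cv2

def mark_elements(frame, contours):
    """Draw thickened edge contour lines around every matched image picture element."""
    marked = frame.copy()
    cv2.drawContours(marked, contours, -1, (0, 255, 0), 3)  # contour index -1 draws them all
    return marked
```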
Preferably, in step S4, the returning the video picture frame with the mark processing completed to the terminal device, and the deleting the related processing record information of the picture to be processed specifically include:
and returning the video picture frame subjected to the marking processing to the terminal equipment after the fidelity compression processing is carried out on the video picture frame, and simultaneously deleting the picture data of the picture to be processed and the related cache data for carrying out segmentation and identification processing on the picture to be processed.
The beneficial effects of the above technical scheme are: and the video picture frame after the marking processing is subjected to fidelity compression processing and then returns to the terminal equipment, so that the transmission efficiency of the video picture frame can be improved, and the distortion in the transmission process can be avoided. In addition, the picture data of the picture to be processed and the related cache data for segmenting and identifying the picture to be processed are deleted, so that the picture to be processed can be prevented from being stolen and tampered, and the data security of picture processing is improved.
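A sketch of step S4 under the assumption that lossless PNG encoding stands in for the "fidelity compression" and that the to-be-processed picture and its intermediate cache are ordinary files on disk; the real storage and transport layers are not specified by the patent.

```python
import os
import cv2

def return_and_clean(marked_frame, reply_path, picture_path, cache_paths):
    """Compress the marked frame losslessly, then delete the query picture and its cache."""
    cv2.imwrite(reply_path, marked_frame, [cv2.IMWRITE_PNG_COMPRESSION, 6])  # reply_path: *.png
    for path in [picture_path, *cache_paths]:
        if os.path.exists(path):
            os.remove(path)
    return reply_path
```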
As can be seen from the above embodiment, the image feature-based picture searching method performs image feature extraction on the videos in a video database to obtain image picture element features, and segments and recognizes the to-be-processed picture from the terminal device to obtain its picture element features; the picture element features are compared with the image picture element features to determine the image picture element features that match them; the video picture frames matching the to-be-processed picture are extracted from the video and subjected to image picture element feature marking; finally, the marked video picture frames are returned to the terminal device and the related processing record information of the to-be-processed picture is deleted. Whether pictures match is thus determined by comparing the element features they contain, so that the matching video picture frames can be extracted.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. An image feature-based picture searching method, characterized by comprising the following steps:
step S1, carrying out image feature extraction processing on the video from a video database to obtain the image picture element features contained in the video, and storing the image picture element features into a block chain in a grouping manner;
step S2, carrying out segmentation and identification processing on the picture to be processed from the terminal equipment, thereby extracting and obtaining picture element characteristics contained in the picture to be processed; comparing the picture element characteristics with image picture element characteristics in a block chain, and determining image picture element characteristics matched with the picture element characteristics;
step S3, according to the image picture element characteristics determined by the matching, calibrating and extracting the video picture frame matched with the picture to be processed from the video; carrying out image picture element feature marking processing on the video picture frame;
and step S4, returning the video picture frame after the marking processing to the terminal equipment, and deleting the related processing record information of the picture to be processed.
2. The image feature-based picture searching method of claim 1, wherein:
in step S1, the image feature extraction processing is performed on the video from the video database, and obtaining the image element features included in the video specifically includes:
performing video picture frame extraction processing on a video in a video database to obtain all video picture frames contained in the video;
after pixel interpolation restoration processing and pixel graying conversion processing are carried out on each video picture frame, character element characteristics and symbol element characteristics contained in an image picture are extracted from each video picture frame and are used as the image picture element characteristics; wherein the character element features or the symbol element features comprise shape features and color features of characters or symbols existing in the image picture.
3. The image feature-based picture searching method of claim 2, wherein:
in step S1, the pixel interpolation repair processing performed on each video picture frame comprises a smoothing-optimized pixel interpolation repair based on the color saturation of the surrounding pixels, and the specific process includes:
Step S101, using the following formula (1), obtaining the color comprehensive saturation value around each pixel point of each video picture frame from the color RGB values of the pixels surrounding that pixel point:
(R,G,B)‾(i,j) = ( Σ_{(a,b)∈[D(i),D(j)]} (R,G,B)(a,b) ) / n{(a,b)∈[D(i),D(j)]}    (1)
In the above formula (1), (R,G,B)‾(i,j) represents the color comprehensive saturation RGB value around the pixel point in row i, column j of the video picture frame; (a,b)∈[D(i),D(j)] represents the pixel points in row a, column b surrounding the pixel point in row i, column j of the video picture frame; (R,G,B)(a,b) represents the RGB value of the pixel point in row a, column b of the video picture frame; n{(a,b)∈[D(i),D(j)]} represents the total number of pixel points surrounding the pixel point in row i, column j of the video picture frame;
Step S102, using the following formula (2), obtaining the interpolation repair rate of each pixel point of each video picture frame from the color comprehensive saturation value around that pixel point:
[Formula (2), set out as an equation image in the original publication]
In the above formula (2), W_R(i,j), W_G(i,j) and W_B(i,j) represent the interpolation repair rates of the R value, the G value and the B value, respectively, in the RGB value of the pixel point in row i, column j of the video picture frame; (R,G,B)‾_R(i,j), (R,G,B)‾_G(i,j) and (R,G,B)‾_B(i,j) represent the R value, the G value and the B value of (R,G,B)‾(i,j); max_{(a,b)∈[D(i),D(j)]}[R(a,b)], max_{(a,b)∈[D(i),D(j)]}[G(a,b)] and max_{(a,b)∈[D(i),D(j)]}[B(a,b)] represent the maximum R value, G value and B value, respectively, among the pixel points surrounding the pixel point in row i, column j of the video picture frame;
Step S103, using the following formula (3), performing interpolation repair on each pixel point according to the interpolation repair rate of that pixel point and the RGB value of the corresponding pixel point:
[Formula (3), set out as an equation image in the original publication]
In the above formula (3), the left-hand side represents the RGB value obtained after interpolation repair of the pixel point in row i, column j of the video picture frame.
4. The image feature-based picture searching method of claim 2, wherein:
in said step S1, the storing of said image picture element feature groupings in a blockchain specifically comprises:
all video picture frames contained in the video are uniquely numbered, and the image picture element characteristics are added with the unique numbers of the video picture frames to which the image picture element characteristics belong; and then all image picture element characteristics belonging to the same video picture frame are stored in a block chain in a grouping mode.
5. The image feature-based picture searching method of claim 4, wherein:
in step S2, the segmenting and recognizing the to-be-processed picture from the terminal device, so that the extracting the picture element features included in the to-be-processed picture specifically includes:
acquiring picture profile distribution information of a picture to be processed from terminal equipment, and dividing the picture to be processed into a plurality of picture units according to the picture profile distribution information;
and carrying out shape and color recognition processing on each picture unit to obtain the shape characteristic and the color characteristic of each picture unit as the picture element characteristic.
6. The image feature-based picture searching method of claim 5, wherein:
in step S2, the comparing process is performed on the picture element features and the image picture element features in the block chain, and determining the image picture element features matched with the picture element features specifically includes:
comparing the picture element characteristics with image picture element characteristics in a block chain, and determining respective similarity values of the picture element characteristics and the image picture element characteristics on shape elements and color elements; if the similarity values of the shape element and the color element are larger than or equal to a preset similarity threshold, determining that the picture element characteristics are matched with the current image picture element characteristics; otherwise, determining that the picture element characteristics do not match the current image picture element characteristics.
7. The image feature-based picture searching method of claim 6, wherein:
in step S3, calibrating and extracting a video picture frame matched with the to-be-processed picture from the video according to the image picture element characteristics determined by the matching specifically includes:
extracting the uniqueness number of the video picture frame to which the matched image picture element feature belongs from the image picture element feature matched with the picture element feature;
determining, according to the uniqueness number, the time axis position in the video of the video picture frame to which the matched image picture element features belong;
and extracting a video picture frame matched with the picture to be processed from the video according to the position of the time axis.
8. The image feature-based picture searching method of claim 7, wherein:
in step S3, the processing of labeling image element features on the video picture frame specifically includes:
and carrying out element feature edge contour line drawing and marking processing on all image picture element features contained in the extracted video picture frame.
9. The image feature-based picture searching method of claim 8, wherein:
in step S4, returning the video picture frame that has completed the mark processing to the terminal device, and deleting the related processing record information of the picture to be processed specifically includes:
and returning the video picture frame subjected to the marking processing to the terminal equipment after the fidelity compression processing is carried out on the video picture frame, and simultaneously deleting the picture data of the picture to be processed and the related cache data for carrying out segmentation and identification processing on the picture to be processed.
CN202210337862.8A (priority date 2022-03-31, filing date 2022-03-31) Image feature-based picture searching method, Active, granted as CN114741553B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210337862.8A CN114741553B (en) 2022-03-31 2022-03-31 Image feature-based picture searching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210337862.8A CN114741553B (en) 2022-03-31 2022-03-31 Image feature-based picture searching method

Publications (2)

Publication Number Publication Date
CN114741553A (en) 2022-07-12
CN114741553B CN114741553B (en) 2023-03-24

Family

ID=82279148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210337862.8A Active CN114741553B (en) 2022-03-31 2022-03-31 Image feature-based picture searching method

Country Status (1)

Country Link
CN (1) CN114741553B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198110A (en) * 2013-03-28 2013-07-10 广州中国科学院软件应用技术研究所 Method and system for rapid video data characteristic retrieval
CN103336776A (en) * 2013-05-13 2013-10-02 云南瑞攀科技有限公司 Image searching method based on image content
US20160371305A1 (en) * 2014-08-01 2016-12-22 Baidu Online Network Technology (Beijing) Co., Ltd. Method, device and apparatus for generating picture search library, and picture search method, device and apparatus
CN106294690A (en) * 2016-08-05 2017-01-04 广东云海云计算科技有限公司 Image/video search platform based on content
CN106610987A (en) * 2015-10-22 2017-05-03 杭州海康威视数字技术股份有限公司 Video image retrieval method, device and system
CN107807979A (en) * 2017-10-27 2018-03-16 朱秋华 The searching method and device of a kind of similar pictures
CN109408652A (en) * 2018-09-30 2019-03-01 北京搜狗科技发展有限公司 A kind of image searching method, device and equipment
CN111475677A (en) * 2020-04-30 2020-07-31 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN112232203A (en) * 2020-10-15 2021-01-15 平安科技(深圳)有限公司 Pedestrian recognition method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115734045A (en) * 2022-11-15 2023-03-03 深圳市东明炬创电子股份有限公司 Video playing method, device, equipment and storage medium
CN117354525A (en) * 2023-12-05 2024-01-05 深圳市旭景数字技术有限公司 Video coding method and system for realizing efficient storage and transmission of digital media
CN117354525B (en) * 2023-12-05 2024-03-15 深圳市旭景数字技术有限公司 Video coding method and system for realizing efficient storage and transmission of digital media

Also Published As

Publication number Publication date
CN114741553B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN114741553B (en) Image feature-based picture searching method
CN100550038C (en) Image content recognizing method and recognition system
WO2016065701A1 (en) Image text recognition method and device
CN111242027B (en) Unsupervised learning scene feature rapid extraction method fusing semantic information
CN103841438B (en) Information-pushing method, information transmission system and receiving terminal for digital television
CN109886978B (en) End-to-end alarm information identification method based on deep learning
WO2017088479A1 (en) Method of identifying digital on-screen graphic and device
CN111625687B (en) Method and system for quickly searching people in media asset video library through human faces
CN114067444A (en) Face spoofing detection method and system based on meta-pseudo label and illumination invariant feature
CN111507138A (en) Image recognition method and device, computer equipment and storage medium
CN114782770A (en) License plate detection and recognition method and system based on deep learning
CN111401171A (en) Face image recognition method and device, electronic equipment and storage medium
CN111814576A (en) Shopping receipt picture identification method based on deep learning
CN111414938B (en) Target detection method for bubbles in plate heat exchanger
CN110991434B (en) Self-service terminal certificate identification method and device
CN113723410B (en) Digital identification method and device for nixie tube
CN116434096A (en) Spatiotemporal motion detection method and device, electronic equipment and storage medium
CN112528994B (en) Free angle license plate detection method, license plate recognition method and recognition system
CN111079749B (en) End-to-end commodity price tag character recognition method and system with gesture correction
CN110135274B (en) Face recognition-based people flow statistics method
CN117197864A (en) Certificate classification recognition and crown-free detection method and system based on deep learning
CN111832497A (en) Text detection post-processing method based on geometric features
CN108734158B (en) Real-time train number identification method and device
CN111382703B (en) Finger vein recognition method based on secondary screening and score fusion
CN104504385A (en) Recognition method of handwritten connected numerical string

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant