CN110796062B - Method and device for precisely matching and displaying object frame and storage device - Google Patents


Publication number
CN110796062B
CN110796062B (application CN201911018603.3A)
Authority
CN
China
Prior art keywords
video frame
frame
current video
highlight
intelligent
Prior art date
Legal status
Active
Application number
CN201911018603.3A
Other languages
Chinese (zh)
Other versions
CN110796062A
Inventor
鞠金采
敬太洋
黄锐鑫
陈杰
余鸿浩
Current Assignee
Zhejiang Huashi Zhijian Technology Co ltd
Original Assignee
Zhejiang Huashi Zhijian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Huashi Zhijian Technology Co ltd filed Critical Zhejiang Huashi Zhijian Technology Co ltd
Priority to CN201911018603.3A
Publication of CN110796062A
Application granted
Publication of CN110796062B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/47 Detecting features for summarising video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/48 Matching video sequences

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a device, and a storage device for accurately matching and displaying an article's highlight border. The method comprises the following steps: acquiring a current video frame of a moving article; if the current video frame is determined to be a video frame that does not require intelligent analysis, calculating the offset distance of the current video frame relative to a historical video frame; adding the offset distance to the position of the highlight border in the historical video frame to obtain the position of the highlight border of the current video frame; and displaying the highlight border on the current video frame at that position. In this way, the match between the border position and the article position is improved while the analysis frame rate of the intelligent algorithm is reduced, border jumping and jitter are avoided, and the user experience is improved.

Description

Method and device for precisely matching and displaying object frame and storage device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a storage device for accurately matching and displaying an article border.
Background
An X-ray security inspection device intelligently analyzes the acquired images, identifies specific articles, and superimposes a highlight border around each article, thereby intelligently identifying forbidden articles. While the device operates, the positions of the articles in the acquired images change constantly, and the highlight borders move along with the articles.
One technical solution in the prior art is to analyze all video frames with the intelligent algorithm, obtain an intelligent frame for each video frame, and draw the article's highlight border according to the intelligent frame. In this method, the frame rate analyzed by the intelligent algorithm is too high, which places a heavy load on the CPU.
Another prior-art solution selects part of the video frames at a preset interval for intelligent-algorithm analysis, obtains intelligent frames for those video frames, and draws the article's highlight border according to the intelligent frames. In this method, each movement of the highlight border covers a large interval, so when the article moves continuously the border visibly jumps out of step with the article's movement, and the user experience is poor.
A further prior-art solution selects part of the video frames at a preset interval for intelligent-algorithm analysis to obtain intelligent frames, calculates the article's offset from the article coordinates of two adjacent intelligent frames, averages that offset over the video frames between the two intelligent frames, and redraws the article's highlight border by adding the averaged offset to the border drawn from the intelligent frame. With this method, when the article starts moving, the highlight border remains still for a period of time, so the border visibly fails to keep up with the article's movement; when the article stops moving, the highlight border continues a short distance in the moving direction before returning to the position of the article, so the border appears to shake. The user experience is again poor.
Disclosure of Invention
The present application provides a method, an apparatus, and a storage device for displaying an article border with an exact match, which improve the match between the border position and the article position, so that the border aligns precisely with the article in the video, while the intelligent-algorithm analysis frame rate is reduced.
In order to solve the technical problem, the application adopts a technical scheme that: a method for accurately matching and displaying a border of an article is provided, and the method comprises the following steps:
acquiring a current video frame of a moving object;
if the current video frame is determined to be a video frame which does not need to be intelligently analyzed, calculating the offset distance of the current video frame relative to the historical video frame;
adding the offset distance to the position of the highlight border in the historical video frame to obtain the position of the highlight border of the current video frame;
and displaying the highlight border on the current video frame according to the highlight border position of the current video frame.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided an apparatus for an item bezel exact match display, the apparatus comprising a processor, a memory coupled to the processor, wherein,
the memory stores program instructions for implementing the method for accurately matching and displaying the object frame;
the processor is configured to execute the program instructions stored by the memory to display an exact match of an item border.
In order to solve the above technical problem, the present application adopts yet another technical solution: a storage device is provided, storing program instructions capable of implementing the above method for displaying an article border with an exact match.
The beneficial effects of the present application are as follows. In the method for displaying an article border with an exact match, part of the video frames are selected at a preset frame interval for intelligent-algorithm analysis; on the intelligently analyzed video frames, the highlight border is displayed according to the intelligent frame, and on the video frames not intelligently analyzed, the article's highlight border is drawn from the offset distance between the video frame and the previous intelligent frame together with the highlight-border position in that intelligent frame. In this way, the match between the border position and the article position is improved while the analysis frame rate of the intelligent algorithm is reduced, border jumping and jitter are avoided, and the user experience is improved.
Drawings
FIG. 1 is a flowchart illustrating a method for displaying a frame of an article in an exact match according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for displaying a frame of an article in an exact match according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a frame display of an intelligent frame according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a moving distance between two adjacent video frames according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a bounding box display of a plurality of video frames in an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for displaying item borders in an exact match according to a third embodiment of the present invention;
FIG. 7 is a flowchart illustrating a method for displaying an exact match of an item border according to a fourth embodiment of the present invention;
FIG. 8 is a schematic diagram of a software structure of an apparatus for displaying a frame of an article in an exact match according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a hardware configuration of an apparatus for displaying a frame of an article in an exact match according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a storage device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly and specifically limited otherwise. All directional indications (such as up, down, left, right, front, and rear) in the embodiments of the present application are only used to explain the relative positional relationship, movement, and the like between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements listed, but may alternatively include other steps or elements not listed or inherent to such a process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Fig. 1 is a flowchart illustrating a method for displaying an item frame in an exact match according to a first embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. As shown in fig. 1, the method comprises the steps of:
s101, acquiring a current video frame of the moving object.
And S102, if the current video frame is determined to be a video frame which does not need to be intelligently analyzed, calculating the offset distance of the current video frame relative to the historical video frame.
In step S102, the historical video frame may be the previous intelligent frame, or any non-intelligent frame between the current video frame and the previous intelligent frame.
S103, adding the offset distance to the position of the highlight border in the historical video frame to obtain the position of the highlight border of the current video frame.
In step S103, since the highlight border has already been drawn in the historical video frame, the highlight border of the article in the current video frame can be obtained by directly translating the border drawn in the historical video frame.
And S104, displaying the highlight border on the current video frame according to the highlight border position of the current video frame.
In an optional embodiment, after step S101, the following steps are further included:
And if the current video frame is determined to be a video frame that needs intelligent analysis, performing intelligent-algorithm analysis on the current video frame to obtain the corresponding intelligent frame, where the intelligent frame includes the article's highlight border.
In this embodiment, for a video frame analyzed by the intelligent algorithm, the article's highlight border is displayed according to the corresponding intelligent frame. For a video frame not analyzed by the intelligent algorithm, the article's highlight border is drawn and displayed using the highlight border already drawn on a historical video frame.
In this embodiment, whether the current video frame needs to be intelligently analyzed is determined according to a preset frame interval. Specifically, it is first judged whether the frame number of the current video frame satisfies the preset interval condition, yielding a judgment result; then, when the judgment result is yes, the current video frame is determined to need intelligent analysis.
Fig. 2 is a flowchart illustrating a method for displaying an item frame in an exact match according to a second embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 2 if the results are substantially the same. As shown in fig. 2, the method comprises the steps of:
s201, a plurality of video frames of the moving object are obtained.
In step S201, when the article is conveyed into the detection area of the security inspection apparatus, a capturing unit in the apparatus, such as an X-ray machine, may capture a plurality of video frames of the article along its moving direction in real time. The moving direction may be left, right, or inclined; the embodiments of the present invention take leftward movement as an example, and the schemes for rightward or inclined movement are the same. After the plurality of video frames of the moving article are acquired, the process proceeds to step S202.
S202, according to a preset frame interval, selecting one video frame from the plurality of video frames at intervals of the frame interval to perform intelligent algorithm analysis so as to obtain an intelligent frame corresponding to the selected video frame, wherein the intelligent frame comprises a highlight frame of the article.
In step S202, part of the video frames obtained in step S201 may be selected at a preset frame interval for intelligent-algorithm analysis, yielding the intelligent frame corresponding to each selected video frame. The frame interval may be the number of video frames between two adjacent frames selected for analysis, or it may be the spacing distance or spacing time between them; the user may choose the frame interval according to the performance of the security inspection apparatus, which this embodiment does not limit. For example, if 15 video frames are collected in step S201 and the frame interval is 4, the frames selected for intelligent-algorithm analysis may be the 1st, 6th and 11th video frames. In practice a security device may acquire approximately 60 video frames per second, so the above example is only for ease of understanding and is not intended to be limiting.
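As an illustrative sketch (the function name is hypothetical, and it assumes the frame interval counts the frames skipped between two analyzed frames, as in the example above), the selection of video frames for intelligent-algorithm analysis might look like:

```python
def frames_for_analysis(total_frames: int, frame_interval: int) -> list[int]:
    """Return the 1-based frame numbers selected for intelligent analysis.

    With frame_interval frames skipped between two analyzed frames,
    frames 1, frame_interval + 2, ... are selected, matching the
    example of frames 1, 6 and 11 for an interval of 4.
    """
    step = frame_interval + 1
    return [n for n in range(1, total_frames + 1) if (n - 1) % step == 0]
```

For 15 collected frames and an interval of 4, this yields frames 1, 6 and 11, as in the example.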
In an alternative embodiment, the intelligent algorithm analysis is performed on the video frame, and comprises: and identifying the object in the video frame, and superposing a highlight border around the object. Specifically, the trained object classification model can be used for processing the video frame so as to identify and classify the objects in the video frame, then the objects are identified according to the categories, and highlight borders are superimposed around the objects, so that the purpose of intelligently identifying forbidden objects or dangerous objects is achieved.
S203, calculating the offset distance between the video frame and the previous intelligent frame.
In step S203, the moving distance between the current video frame and the nearest preceding intelligent frame may be calculated directly; that moving distance is the offset distance. Specifically, referring to fig. 3 and fig. 4 together, for each of the video frames obtained in step S201 the moving distance relative to its previous video frame is cached and recorded. For example, as shown in fig. 4, the following distances are recorded in sequence: the 2nd video frame moved x1 relative to the 1st; the 3rd moved x2 relative to the 2nd; the 4th moved x3 relative to the 3rd; the 5th moved x4 relative to the 4th; and the 6th moved x5 relative to the 5th. The moving distance between every two adjacent video frames between the current video frame and the previous intelligent frame is then obtained. For example, if the current video frame is the 6th and the previous intelligent frame is the 1st, the pairwise moving distances x1, x2, x3, x4 and x5 between them are obtained, and the offset distance between the video frame and the previous intelligent frame is their sum: x1 + x2 + x3 + x4 + x5.
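The accumulation described above can be sketched as follows (hypothetical names; the per-frame moving distances are assumed to be cached in a mapping keyed by frame number):

```python
def offset_from_intelligent_frame(move_distances: dict[int, float],
                                  intelligent_frame_no: int,
                                  current_frame_no: int) -> float:
    """Sum the cached per-frame moving distances between the previous
    intelligent frame and the current video frame.

    move_distances[n] is the distance frame n moved relative to frame n-1.
    """
    return sum(move_distances[n]
               for n in range(intelligent_frame_no + 1, current_frame_no + 1))
```

With the distances x1 to x5 of the example cached for frames 2 to 6, the offset of the 6th frame from the 1st intelligent frame is x1 + x2 + x3 + x4 + x5.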
In this embodiment, the moving distance between a video frame and an intelligent frame, or between two adjacent video frames (hereinafter, the moving distance between two video frames), can be obtained in several ways. Based on a contour recognition algorithm, it can be obtained from the matching result of image contour features between the two video frames. Alternatively, it can be obtained from the similarity of image color features between the two video frames: specifically, the histogram distance between two adjacent video frames is calculated from their color histograms, or the Euclidean distance between two adjacent video frames is calculated from their binary images. The time difference between the two video frames can also be combined: the moving distance is obtained from the time difference together with the image distance of the two frames' color histograms, or from the time difference together with the image distance of the two frames' binary images, where the image distance is a chi-square distance, a Euclidean distance, or an L1-norm distance.
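As one concrete instance of the color-feature option, a chi-square distance between intensity histograms can be computed as below (a minimal NumPy sketch; how the resulting dissimilarity is mapped to a pixel displacement, e.g. using the known belt speed and the inter-frame time difference, is device-specific and not shown here):

```python
import numpy as np

def chi_square_histogram_distance(frame_a: np.ndarray, frame_b: np.ndarray,
                                  bins: int = 32) -> float:
    """Chi-square distance between the intensity histograms of two frames.

    Returns 0.0 for identical frames; larger values indicate greater
    dissimilarity of the intensity distributions.
    """
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256), density=True)
    denom = ha + hb
    mask = denom > 0  # avoid division by zero in empty bins
    return 0.5 * float(np.sum((ha[mask] - hb[mask]) ** 2 / denom[mask]))
```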
And S204, drawing the highlight border of the corresponding article in the video frame according to the offset distance and the highlight border position of the article in the previous intelligent frame.
In step S204, for a video frame selected for intelligent-algorithm analysis, the article's highlight border may be drawn according to the corresponding intelligent frame; for a video frame not analyzed by the intelligent algorithm, the article's highlight border is drawn based on the nearest preceding intelligent frame. Specifically, please refer to fig. 5: first, the position coordinates of the article's highlight border in the intelligent frame are obtained; then, the offset distance is accumulated onto those coordinates along the article's moving direction to obtain the position coordinates of the article's highlight border in the video frame; finally, the highlight border is drawn at those coordinates. Fig. 5 shows the drawing of the highlight borders when the previous intelligent frame is the 1st video frame and the current frames are the 2nd to 6th. The position coordinate of the article's highlight border in the intelligent frame (obtained by processing the 1st video frame) is Pid. In this embodiment, the highlight border is drawn from the offset distance without re-identifying the article: the offset distance between the 2nd video frame and the intelligent frame is x1, so the border position in the 2nd video frame is Pid + x1; the offset distance for the 3rd video frame is x1 + x2, giving Pid + x1 + x2; for the 4th, x1 + x2 + x3, giving Pid + x1 + x2 + x3; for the 5th, x1 + x2 + x3 + x4, giving Pid + x1 + x2 + x3 + x4; and for the 6th, x1 + x2 + x3 + x4 + x5, giving Pid + x1 + x2 + x3 + x4 + x5.
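The coordinates listed above follow a simple cumulative pattern, sketched here with hypothetical names (treating Pid and the offsets as one-dimensional coordinates along the moving direction):

```python
def border_positions(pid: float, move_distances: list[float]) -> list[float]:
    """Highlight-border coordinate for each frame after the intelligent frame.

    pid is the border position obtained from the intelligent frame;
    move_distances = [x1, x2, ...] are the cached per-frame distances.
    Returns Pid + x1, Pid + x1 + x2, ... for the 2nd, 3rd, ... frames.
    """
    positions, offset = [], 0.0
    for x in move_distances:
        offset += x
        positions.append(pid + offset)
    return positions
```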
And S205, displaying the highlighted frame of the article.
In step S205, the video frame selected for the intelligent algorithm analysis may be displayed according to the highlighted border of the item drawn by the corresponding intelligent frame; and displaying the video frames which are not subjected to the intelligent algorithm analysis according to the highlighted border of the article drawn in the step S204.
Fig. 6 is a flowchart illustrating a method for displaying an item frame in an exact match according to a third embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 6 if the results are substantially the same. As shown in fig. 6, the method includes the steps of:
s301, collecting a video frame.
S302, calculating the moving distance of the current video frame relative to the previous video frame, and caching and recording the moving distance.
S303, judging whether a preset interval condition is met or not according to the frame number of the current video frame to obtain a judgment result.
S304, when the judgment result is yes, carrying out intelligent algorithm analysis on the current video frame to obtain an intelligent frame corresponding to the selected video frame, and caching and recording the position of a highlight frame of an article in the intelligent frame.
S305, when the judgment result is negative, calculating the offset distance between the video frame and the previous intelligent frame, and drawing the highlight frame corresponding to the object in the video frame according to the offset distance and the highlight frame position of the object in the previous intelligent frame.
And S306, displaying the highlighted frame of the article.
In step S302, the moving distance between two video frames may be obtained, based on a contour recognition algorithm, from the matching result of image contour features between the two video frames, or from the similarity of image color features between the two video frames. Refer specifically to the description following step S203 in the second embodiment.
In step S303, the preset interval condition may be that the frame number of the current video frame is an integer multiple of x+1, where x is the preset frame interval.
In step S305, the previous smart frame is a smart frame that precedes and is closest to the current video frame. Step S305 may refer to the descriptions of step S203 and step S204 in the second embodiment, which are not described in detail herein.
In step S306, the video frame selected for the intelligent algorithm analysis may be displayed according to the highlighted border of the item drawn by the corresponding intelligent frame; and displaying the highlighted frame of the article drawn according to the step S305 for the video frame which is not subjected to the intelligent algorithm analysis.
Fig. 7 is a flowchart illustrating a method for displaying an item frame in an exact match according to a fourth embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 7 if the results are substantially the same. As shown in fig. 7, the method includes the steps of:
s401, caching the new intelligent frame as the current intelligent frame.
S402, redrawing the object highlight border in the current intelligent frame in the video display area.
S403, a video frame is received.
S404, calculating the offset distance between the current video frame and the current intelligent frame.
S405, accumulating the offset of the offset distance on the basis of the position coordinates of the highlight frame of the object in the current intelligent frame along the moving direction of the object to obtain the position coordinates of the highlight frame of the object in the current video frame.
S406, drawing the highlighted frame of the item based on the position coordinates of the highlighted frame of the item in the current video frame.
And S407, displaying the highlighted frame of the article.
S408, judging whether a new intelligent frame is received, if so, executing S401; if not, S403 is executed.
In this embodiment, the security inspection device processes part of the video frames in real time to obtain the corresponding intelligent frames. After a new intelligent frame is obtained, it is cached as the current intelligent frame, and the corresponding video frame is cached to ensure synchronization with the article image.
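Steps S401 to S408 can be condensed into a small loop, sketched here under an assumed event format ("smart" events carry a new border position from an intelligent frame, "video" events carry that frame's moving distance relative to the previous frame):

```python
def run_display_loop(events):
    """Simulate the fourth embodiment's display loop.

    Returns the border position displayed for each incoming event:
    an intelligent frame resets the cached position (S401-S402), and
    each video frame adds its distance to the running offset (S403-S406).
    """
    displayed = []
    border = 0.0   # border position from the current intelligent frame
    offset = 0.0   # cumulative offset since that intelligent frame
    for kind, value in events:
        if kind == "smart":
            border, offset = value, 0.0
        else:
            offset += value
        displayed.append(border + offset)  # S407: display the border
    return displayed
```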
Fig. 8 is a schematic structural diagram of an apparatus for displaying a frame of an article in an exact match manner according to an embodiment of the present invention. As shown in fig. 8, the apparatus 70 includes an acquisition module 71, an analysis module 72, a calculation module 73, a rendering module 74, and a display module 75.
The acquiring module 71 is configured to acquire a current video frame of the moving article;
the analysis module 72 is configured to select one video frame from the plurality of video frames at intervals of a preset frame interval to perform intelligent algorithm analysis, so as to obtain an intelligent frame corresponding to the selected video frame, where the intelligent frame includes a highlight frame of the article;
a calculating module 73, configured to calculate an offset distance between the current video frame and the historical video frame if it is determined that the current video frame is a video frame that does not need to be intelligently analyzed;
the drawing module 74 is configured to increase the offset distance based on the position of the highlight frame in the historical video frame, so as to obtain the position of the highlight frame of the current video frame;
the display module 75 is configured to display the highlight frame on the current video frame according to the highlight frame position of the current video frame, or display the highlight frame according to the highlight frame position on the intelligent frame corresponding to the current video frame.
Optionally, the obtaining module 71 is further configured to buffer and record a moving distance of each video frame relative to its previous video frame.
Optionally, the analysis module 72 is further configured to obtain a moving distance between two video frames according to the similarity of image color features between the two video frames.
Optionally, the analysis module 72 is further configured to obtain a moving distance between two video frames according to a matching result of image contour features between the two video frames.
Optionally, the calculating module 73 is further configured to obtain a moving distance between every two adjacent video frames between the video frame and the previous smart frame; and accumulating the obtained moving distance to obtain the offset distance between the video frame and the previous intelligent frame.
Optionally, the drawing module 74 is further configured to obtain coordinates of a position of a highlight frame of the item in the smart frame; accumulating the offset of the offset distance on the basis of the position coordinates of the highlight border of the object in the intelligent frame along the moving direction of the object to obtain the position coordinates of the highlight border of the object in the video frame; and drawing the highlighted border of the item based on the position coordinates of the highlighted border of the item in the video frame.
Optionally, the calculating module 73 is further configured to receive a frame-interval setting instruction input by a user; to collect a video frame; to judge, according to the frame number of the video frame, whether a preset interval condition is met, so as to obtain a judgment result; and, when the judgment result is yes, to perform intelligent algorithm analysis on the video frame to obtain the intelligent frame corresponding to the selected video frame.
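One plausible reading of the preset interval condition is a simple modulo test on the frame number; the patent does not spell the condition out, so the rule below is an assumption:

```python
def needs_smart_analysis(frame_no, interval):
    """True when the frame number meets the preset interval condition
    (assumed here to be a multiple of the user-set interval)."""
    return frame_no % interval == 0
```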
Referring to fig. 9, fig. 9 is a schematic structural diagram of an apparatus for precisely matching and displaying a frame of an article according to an embodiment of the present invention. As shown in fig. 9, the apparatus 80 includes a processor 81 and a memory 82 coupled to the processor 81.
The memory 82 stores program instructions for implementing the method for accurately matching and displaying the item border according to any of the above embodiments.
The processor 81 is configured to execute the program instructions stored in the memory 82 to implement the method for accurately matching and displaying the item border.
The processor 81 may also be referred to as a CPU (Central Processing Unit). The processor 81 may be an integrated circuit chip having signal processing capabilities. The processor 81 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a storage device according to an embodiment of the invention. The storage device of the embodiment of the present invention stores program instructions 91 capable of implementing all of the methods described above. The program instructions 91 may be stored in the storage device in the form of a software product and include several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage device includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, as well as terminal devices such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit.

The above embodiments are merely examples and are not intended to limit the scope of the present disclosure; all modifications, equivalents, and flows made using the contents of the specification and drawings of the present disclosure, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of the present disclosure.
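Putting the modules described above together, the per-frame position update can be sketched end to end as below; the modulo interval rule, the direction vector, and all names are illustrative assumptions layered on the patent's description:

```python
def display_position(frame_no, interval, smart_border, cached_moves, direction):
    """Return the highlight-frame position for frame `frame_no`: the border
    from the last intelligent frame on analysed frames, otherwise that
    border shifted by the accumulated moving distance along `direction`.
    `cached_moves[n]` is the moving distance of frame n vs. frame n - 1."""
    if frame_no % interval == 0:      # frame sent to the intelligent algorithm
        return smart_border
    last_smart = (frame_no // interval) * interval
    offset = sum(cached_moves[n] for n in range(last_smart + 1, frame_no + 1))
    dx, dy = direction[0] * offset, direction[1] * offset
    x1, y1, x2, y2 = smart_border
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
```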

Claims (11)

1. A method for accurately matching and displaying a border of an article, the method comprising:
acquiring a current video frame of a moving object;
if the current video frame is determined to be a video frame which does not need to be intelligently analyzed, calculating the offset distance of the current video frame relative to the historical video frame; wherein the offset distance of the current video frame relative to the historical video frame is obtained based on a moving distance between at least one group of video frame pairs, the at least one group of video frame pairs comprises every two adjacent video frames between the current video frame and the historical video frame, and the moving distance between the video frame pairs is calculated based on a matching result of image contour features or similarity of image color features between the video frame pairs;
increasing the offset distance on the basis of the position of the highlight frame in the historical video frame, and further obtaining the position of the highlight frame of the current video frame;
and displaying the highlight border on the current video frame according to the highlight border position of the current video frame.
2. The method of claim 1, wherein after obtaining the current video frame of the moving object, further comprising:
and if the current video frame is determined to be the video frame needing intelligent analysis, performing intelligent algorithm analysis on the current video frame to obtain an intelligent frame corresponding to the video frame, wherein the intelligent frame comprises a highlight frame of the article.
3. The method of claim 2, wherein performing intelligent algorithmic analysis on the current video frame comprises:
and identifying the object in the current video frame, and superposing a highlight frame around the object.
4. The method of claim 2, wherein the historical video frame is a previous smart frame;
or the historical video frame is any non-intelligent frame between the current video frame and the previous intelligent frame.
5. The method of claim 1, wherein after obtaining the current video frame of the moving object, further comprising:
and determining whether the current video frame needs to be intelligently analyzed according to a preset frame interval.
6. The method of claim 5, wherein determining whether the current video frame requires intelligent analysis according to the preset frame interval comprises:
judging whether a preset interval condition is met or not according to the frame number of the current video frame to obtain a judgment result;
and when the judgment result is yes, determining that the current video frame needs to be intelligently analyzed.
7. The method of claim 1 or 2, further comprising, after acquiring the video frame of the moving item:
caching and recording the moving distance of the current video frame relative to the previous video frame.
8. The method of claim 7, wherein calculating the offset distance of the current video frame relative to the historical video frame comprises:
acquiring the moving distance between every two adjacent video frames between the current video frame and the historical video frame;
and accumulating the obtained moving distance to obtain the offset distance between the current video frame and the historical video frame.
9. The method of claim 1, wherein the historical video frame is a previous smart frame, and the increasing the offset distance based on a highlight border position in the historical video frame to obtain the highlight border position of the current video frame comprises:
obtaining the position coordinates of the highlight frame of the object in the intelligent frame;
accumulating the offset distance, along the moving direction of the article, onto the position coordinates of the highlight border of the article in the intelligent frame to obtain the position coordinates of the highlight border of the article in the current video frame;
drawing the highlighted border of the item based on the highlighted border position coordinates of the item in the video frame.
10. An apparatus for displaying a frame of an item in an exact match, the apparatus comprising a processor, a memory coupled to the processor, wherein,
the memory stores program instructions for implementing a method for exact match display of item borders according to any of claims 1-9;
the processor is configured to execute the program instructions stored in the memory to implement the exact match display of the item border.
11. A storage device having stored thereon program instructions capable of implementing the method for exact match display of the item borders according to any of claims 1-9.
CN201911018603.3A 2019-10-24 2019-10-24 Method and device for precisely matching and displaying object frame and storage device Active CN110796062B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911018603.3A CN110796062B (en) 2019-10-24 2019-10-24 Method and device for precisely matching and displaying object frame and storage device


Publications (2)

Publication Number Publication Date
CN110796062A CN110796062A (en) 2020-02-14
CN110796062B true CN110796062B (en) 2022-08-09

Family

ID=69441170


Country Status (1)

Country Link
CN (1) CN110796062B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069984A (en) * 2020-09-03 2020-12-11 浙江大华技术股份有限公司 Object frame matching display method and device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107492115A (en) * 2017-08-30 2017-12-19 北京小米移动软件有限公司 The detection method and device of destination object
CN107832683A (en) * 2017-10-24 2018-03-23 亮风台(上海)信息科技有限公司 A kind of method for tracking target and system
CN108983306A (en) * 2018-06-06 2018-12-11 浙江大华技术股份有限公司 A kind of method and rays safety detection apparatus of article frame flow display
CN110365902A (en) * 2019-07-23 2019-10-22 湖南省湘电试研技术有限公司 The video anti-fluttering method and system of intelligent safety helmet based on Harris Corner Detection

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN101400001B (en) * 2008-11-03 2010-06-02 清华大学 Generation method and system for video frame depth chart
CN102457724B (en) * 2010-10-22 2014-03-12 Tcl集团股份有限公司 Image motion detecting system and method
CN106534951B (en) * 2016-11-30 2020-10-09 北京小米移动软件有限公司 Video segmentation method and device



Similar Documents

Publication Publication Date Title
CN108985199B (en) Detection method and device for commodity taking and placing operation and storage medium
EP3295424B1 (en) Systems and methods for reducing a plurality of bounding regions
CN105938622B (en) Method and apparatus for detecting object in moving image
US8989448B2 (en) Moving object detecting device, moving object detecting method, moving object detection program, moving object tracking device, moving object tracking method, and moving object tracking program
US11600008B2 (en) Human-tracking methods, systems, and storage media
CN111488791A (en) On-device classification of fingertip movement patterns as gestures in real time
US9626577B1 (en) Image selection and recognition processing from a video feed
US20140355883A1 (en) Method and system for recognizing information
JP5780142B2 (en) Image processing apparatus and image processing method
KR20180054808A (en) Motion detection within images
CN109977824B (en) Article taking and placing identification method, device and equipment
WO2022105740A1 (en) Video processing method and apparatus, readable medium, and electronic device
CN110782433A (en) Dynamic information violent parabolic detection method and device based on time sequence and storage medium
CN111666792B (en) Image recognition method, image acquisition and recognition method, and commodity recognition method
CN110717452B (en) Image recognition method, device, terminal and computer readable storage medium
CN110782464B (en) Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device
CN110796062B (en) Method and device for precisely matching and displaying object frame and storage device
US9400929B2 (en) Object detection device and method for detecting an object by performing a raster scan on a scan window
CN115273063A (en) Method and device for determining object information, electronic equipment and storage medium
JP2019517079A (en) Shape detection
CN110807457A (en) OSD character recognition method, device and storage device
CN107770487B (en) Feature extraction and optimization method, system and terminal equipment
CN111986229A (en) Video target detection method, device and computer system
JP6305856B2 (en) Image processing apparatus, image processing method, and program
CN110348353B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220602

Address after: 310056 floor 4, building 6, No. 1181, Bin'an Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Huashi Zhijian Technology Co.,Ltd.

Address before: No.1187 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.

GR01 Patent grant