CN112052713B - Video processing method and device and handheld camera - Google Patents

Info

Publication number
CN112052713B
Authority
CN
China
Prior art keywords
image recognition
candidate image
recognition algorithms
recognition algorithm
execution time
Prior art date
Legal status
Active
Application number
CN202010296289.1A
Other languages
Chinese (zh)
Other versions
CN112052713A (en)
Inventor
康含玉
梁峰
Current Assignee
Shanghai Moxiang Network Technology Co., Ltd.
Original Assignee
Shanghai Moxiang Network Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Moxiang Network Technology Co., Ltd.
Priority to CN202010296289.1A
Priority to PCT/CN2020/099833 (published as WO2021208256A1)
Publication of CN112052713A
Application granted
Publication of CN112052713B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The embodiments of the present application provide a video processing method, a video processing device, and a handheld camera. The method first determines a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms, and then performs image processing on the to-be-processed image frames in the video using the target image recognition algorithm. In this way, multiple image recognition algorithms can each be applied to image frames in the video, satisfying diverse video processing and description requirements; at the same time, because each to-be-processed image frame is processed by only the target image recognition algorithm, the image processing time is reduced and the requirement of real-time processing during video shooting can be met.

Description

Video processing method and device and handheld camera
Technical Field
The embodiments of the present application relate to the technical field of image processing, and in particular to a video processing method, a video processing device, and a handheld camera.
Background
With the development of video processing technology, objects, scenes, and other subjects contained in a video segment often need to be described from multiple different angles. Because no single algorithm currently satisfies all video recognition and description requirements, in practice several different algorithms are used to process the video segment as needed. However, if every selected algorithm is run on every image frame, the total processing time becomes too long to satisfy real-time requirements.
Disclosure of Invention
In view of the above, an object of the present application is to provide a video processing method, a video processing device, and a handheld camera, so as to overcome the prior-art drawback that processing a video segment with multiple algorithms is too time-consuming.
The embodiment of the application provides a video processing method, which comprises the following steps:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
and carrying out image processing on the image frames to be processed in the video by using the target image recognition algorithm.
Optionally, the priority calculation information includes execution time information for identifying the latest execution time of each candidate image recognition algorithm; correspondingly, the determining of a target image recognition algorithm from the plurality of candidate image recognition algorithms according to the priority calculation information corresponding to the candidate image recognition algorithms includes:
sorting the plurality of candidate image recognition algorithms according to the execution time information corresponding to each candidate image recognition algorithm, so as to determine the target image recognition algorithm according to a sorting result.
Optionally, the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed by each candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms includes:
sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information corresponding to each candidate image recognition algorithm.
Optionally, the sorting of the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms includes:
obtaining an idle time interval value between the current time and the latest execution time of each candidate image recognition algorithm according to the execution time information corresponding to that algorithm; and
sorting the plurality of candidate image recognition algorithms according to the quotient of the idle time interval value and the longest operation time interval value.
Optionally, the priority calculation information further includes weight information for identifying the importance of each candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms includes:
sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information corresponding to each candidate image recognition algorithm.
Optionally, the weight information includes a weight coefficient; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms includes:
obtaining an idle time interval value between the current time and the latest execution time of each candidate image recognition algorithm according to the execution time information corresponding to that algorithm; and
sorting the plurality of candidate image recognition algorithms according to the product of the idle time interval value and the weight coefficient.
Optionally, the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed by each candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the product of the idle time interval value and the weight coefficient includes:
sorting the plurality of candidate image recognition algorithms according to the quotient of the product and the longest operation time interval value.
Optionally, the number of the target image recognition algorithms is 1.
Optionally, after the image processing is performed on the image frame to be processed in the video by using the target image recognition algorithm, the method further includes: updating the priority calculation information corresponding to the candidate image recognition algorithm, and updating the range of the image frame to be processed.
Optionally, the number of the image frames to be processed is 1, and the updating the range of the image frames to be processed includes:
and determining the image frame immediately following the current to-be-processed image frame as the new to-be-processed image frame.
An embodiment of the present application further provides a video processing device, including a memory, a processor, and a video collector, wherein the video collector is used for collecting a target to be tracked in a target area, and the memory is used for storing program code; the processor calls the program code, which, when executed, is configured to:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
and carrying out image processing on the image frames to be processed in the video by using the target image recognition algorithm.
An embodiment of the present application further provides a handheld camera including the foregoing video processing device, and further including a carrier that is fixedly connected with the video collector and is used for carrying at least a part of the video collector.
Optionally, the carrier includes, but is not limited to, a handheld pan-tilt head.
Optionally, the handheld pan-tilt head is a handheld three-axis pan-tilt head.
Optionally, the video collector includes, but is not limited to, a camera mounted on the handheld three-axis pan-tilt head.
In the embodiments of the present application, a target image recognition algorithm is first determined from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms; image processing is then performed on the to-be-processed image frames in the video using the target image recognition algorithm. In this way, multiple image recognition algorithms can each be applied to image frames in the video, satisfying diverse video processing and description requirements; at the same time, because each to-be-processed image frame is processed by only the target image recognition algorithm, the image processing time is reduced and the requirement of real-time processing during video shooting can be met.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a video processing method according to a second embodiment of the present application;
fig. 3 is a schematic flow chart of a video processing method according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of a video processing apparatus according to a fourth embodiment of the present application;
fig. 5 is a schematic structural diagram of a handheld pan-tilt head according to a fifth embodiment of the present application;
fig. 6 is a schematic structural diagram of the connection between a handheld pan-tilt head and a mobile phone according to the fifth embodiment of the present application;
fig. 7 is a schematic structural diagram of a handheld pan-tilt head according to the fifth embodiment of the present application.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that the terms "first," "second," and the like as used in the description and in the claims, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. Also, the use of the terms "a" or "an" and the like do not denote a limitation of quantity, but rather denote the presence of at least one.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
Example one
An embodiment of the present application provides a video processing method, as shown in fig. 1, fig. 1 is a schematic flowchart of the video processing method provided in the embodiment of the present application, and includes the following steps:
step S101, determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms.
In this embodiment, the candidate image recognition algorithms are used to recognize image frames in a video and obtain corresponding description information. Different candidate image recognition algorithms may differ in the objects they recognize, the description information they generate, the time they consume, and the processor resources they require. This embodiment does not limit the specific types and number of candidate image recognition algorithms, which may be selected according to the video description requirements of the practical application.
For example, if species in the target image are to be identified to generate descriptive information for the identified species, an image recognition algorithm for identifying different species, such as people, cats, dogs, etc., may be selected; if the scene in the target image needs to be identified to generate description information of the identified scene, an image identification algorithm for identifying different scenes such as sunny days, rainy days, dark nights and the like can be selected; if the facial expression in the target image needs to be recognized to generate description information of the recognized facial expression, an image recognition algorithm for recognizing different facial expressions such as smile, laugh, cry and the like can be selected.
In this embodiment, the priority calculation information is used to identify the use priority of the plurality of candidate image recognition algorithms. The embodiment is not limited to the calculation, identification, recording format, etc. of the priority calculation information.
For example, all candidate image recognition algorithms may be scored or ranked according to a preset priority calculation model, and the use priorities of the candidate image recognition algorithms may be identified by numbers or characters.
Optionally, since the attribute information corresponding to the candidate image recognition algorithm changes with the passage of time, in order to ensure real-time data processing, the priority calculation information corresponding to the candidate image recognition algorithm may be obtained according to the attribute information corresponding to the current time of each candidate image recognition algorithm.
In this embodiment, the target image recognition algorithm is one or more candidate image recognition algorithms, and may be set according to at least one of requirements of video description, hardware performance, or time consumption requirements in practical application.
Optionally, in order to reduce the time consumption of image processing, the number of the target image recognition algorithms may be set to 1, that is, only one target image recognition algorithm is determined each time, so that only one target image recognition algorithm is subsequently used to perform corresponding image processing on the image frame to be processed.
And step S102, carrying out image processing on the image frame to be processed in the video by using a target image recognition algorithm.
In this embodiment, the video includes a plurality of consecutive image frames, and the image frame to be processed is an image frame that has not been subjected to image processing by using any candidate image recognition algorithm in the video. The image frame to be processed may be one image frame or a plurality of continuous image frames, and this embodiment is not limited in this embodiment.
In this embodiment, the target image recognition algorithm performs image processing on at least one to-be-processed image frame adjacent to the most recently processed image frame; when there are multiple to-be-processed image frames, they are processed sequentially according to their order in the video.
Optionally, since the candidate image recognition algorithms are usually selected according to the video processing requirements, the interval between two executions of any candidate algorithm should not grow so long that it affects the final video processing effect. To avoid this, the target image recognition algorithm may perform image processing on only one to-be-processed image frame of the video in step S102.
As can be seen from the above, in this embodiment of the present invention, a target image recognition algorithm is first determined from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms; image processing is then performed on the to-be-processed image frames in the video using the target image recognition algorithm. In this way, multiple image recognition algorithms can each be applied to image frames in the video, satisfying diverse video processing and description requirements; at the same time, because each to-be-processed image frame is processed by only the target image recognition algorithm, the image processing time is reduced and the requirement of real-time processing during video shooting can be met.
Example two
A second embodiment of the present application provides a video processing method, as shown in fig. 2, fig. 2 is a schematic flowchart of the video processing method provided in the second embodiment of the present application, and the method includes the following steps:
step S201, according to the execution time information corresponding to the candidate image recognition algorithm, a plurality of candidate image recognition algorithms are sorted, so that the target image recognition algorithm is determined according to the sorting result.
In this embodiment, the candidate image recognition algorithms are selected according to the video processing requirements, and if the interval between two executions of a candidate image recognition algorithm grows too long, a large error may appear in the image processing result. Therefore, to obtain a better video processing effect, the latest execution time of each candidate image recognition algorithm needs to be considered when determining the target image recognition algorithm. Specifically, the priority calculation information may include execution time information for identifying the latest execution time of each candidate image recognition algorithm, so that the plurality of candidate image recognition algorithms can be sorted according to their latest execution times and the target image recognition algorithm determined according to the sorting result.
For example, all candidate image recognition algorithms may be sorted in descending order of the interval between each algorithm's latest execution time and the current time, and at least one candidate image recognition algorithm at the front of the ranking may be preferentially selected as the target image recognition algorithm.
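Such a ranking can be sketched as follows; the algorithm names and timestamps below are illustrative assumptions, not values from the patent.

```python
def rank_by_idle_interval(last_times, cur_time):
    """Sort algorithm names so the one that has been idle longest
    (largest cur_time - last_time) comes first."""
    return sorted(last_times,
                  key=lambda name: cur_time - last_times[name],
                  reverse=True)

# Hypothetical last-execution timestamps, in seconds.
last_times = {"face": 9.0, "scene": 7.5, "species": 8.2}
order = rank_by_idle_interval(last_times, cur_time=10.0)
target = order[0]  # "scene" has been idle longest and becomes the target
```

Ranking on the raw idle interval treats all algorithms alike; the interval-time and weight information described later refine this ordering.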
This embodiment does not limit how the latest execution time of a candidate image recognition algorithm is identified in the execution time information. For example, it may be identified by the timestamp of the time at which the candidate image recognition algorithm was last executed, or by the time difference between that time and the current time. It may also be identified by the sequence number in the video of the image frame on which the algorithm was last executed, by the number of image frames between that image frame and the image frame to be processed, or by the timestamp of that image frame.
Optionally, the number of image frames allowed between two image frames processed by a candidate image recognition algorithm differs from algorithm to algorithm: with the same interval between two runs, different candidate image recognition algorithms produce different processing-result errors. Therefore, to further improve the image processing effect while keeping the image processing result reliable, the longest operation time interval value allowed by each candidate image recognition algorithm also needs to be considered when determining the target image recognition algorithm. Specifically, the priority calculation information may further include interval time information for identifying the longest operation time interval value allowed by each candidate image recognition algorithm.
This embodiment does not limit how the longest operation time interval value allowed by a candidate image recognition algorithm is identified in the interval time information. For example, it may be identified by a time value such as 0.1 second or 1 second, or by the number of image frames allowed between two image frames processed by the candidate image recognition algorithm.
Correspondingly, when the latest execution time of the candidate image recognition algorithm and the allowed longest operation time interval value are considered at the same time, step S201 further includes: and sequencing the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms.
Optionally, when sequentially processing consecutive image frames in the video, for example, when processing the video in real time during the shooting process, in order to improve the video processing efficiency, step S201 may further include:
in the sub-step S201a, according to the execution time information corresponding to the candidate image recognition algorithm, an idle time interval value between the current time and the latest execution time of the candidate image recognition algorithm is obtained.
This embodiment does not limit how the current time and the idle time interval value are identified; they may be identified by a value representing time or by a value representing a number of image frames, as long as the current time uses the same identification mode as the latest execution time of the candidate image recognition algorithm.
For example, when execution time information corresponding to a candidate image recognition algorithm is identified using a timestamp corresponding to the time at which the candidate image recognition algorithm was most recently executed, the idle time interval value may be identified with a time value such as 0.1 second, 0.5 second, 1 second; when the execution time information corresponding to the candidate image recognition algorithm is identified using the sequence number of the image frame in the video corresponding to the last execution of the candidate image recognition algorithm, the idle time interval value may be identified by an image frame number value such as 1, 2, 3.
Sub-step S201b, sorting the plurality of candidate image recognition algorithms according to a quotient of the idle time interval value and the longest operating time interval value.
When the quotient of the idle time interval value and the longest operation time interval value is larger, the idle time interval value of the candidate image recognition algorithm is closer to its allowed longest operation time interval value, and that algorithm needs a higher execution priority. By calculating this quotient, the candidate image recognition algorithms whose idle time interval values are closest to their allowed longest operation time interval values can therefore be ranked at the front and preferentially determined as the target image recognition algorithm.
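Sub-steps S201a and S201b can be sketched as below; the function name and numeric values are assumptions for illustration.

```python
def rank_by_interval_ratio(candidates, cur_time):
    """Sort candidates by idle_interval / longest_allowed_interval,
    largest ratio first, i.e. closest to its allowed limit."""
    def ratio(name):
        last_time, interv = candidates[name]    # S201a: idle interval numerator
        return (cur_time - last_time) / interv  # S201b: the quotient
    return sorted(candidates, key=ratio, reverse=True)

# Hypothetical (last execution time, longest allowed interval), in seconds.
candidates = {
    "face":  (9.0, 0.5),   # idle 1.0 s of an allowed 0.5 s -> ratio 2.0
    "scene": (7.0, 5.0),   # idle 3.0 s of an allowed 5.0 s -> ratio 0.6
}
ranked = rank_by_interval_ratio(candidates, cur_time=10.0)  # "face" first
```

Note that although "scene" has been idle longer in absolute terms, "face" has exceeded its own allowed interval and so ranks first.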
Optionally, considering that different image recognition algorithms may have different frequency requirements or result-error requirements during video processing, the priority calculation information may further include weight information identifying the importance of each candidate image recognition algorithm, so that more important image recognition algorithms are preferentially determined as the target image recognition algorithm.
Correspondingly, when the latest execution time of the candidate image recognition algorithm and the importance degree of the candidate image recognition algorithm are considered, step S201 further includes: and sequencing the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms.
Alternatively, the importance of each candidate image recognition algorithm may be set by a preset algorithm or manually, that is, the weight information may include a weight coefficient. Correspondingly, step S201 may further include:
in the sub-step S201c, according to the execution time information corresponding to the candidate image recognition algorithm, an idle time interval value between the current time and the latest execution time of the candidate image recognition algorithm is obtained.
The sub-step S201c is the same as the sub-step S201a, and has corresponding advantages, which are not described herein again.
In sub-step S201d, the candidate image recognition algorithms are ranked according to the product of the idle time interval value and the weight coefficient.
By calculating the product value of the idle time interval value and the weight coefficient, candidate image recognition algorithms with higher importance and longer idle time interval value can be ranked in the front row and preferentially determined as the target image recognition algorithm.
Optionally, in order to comprehensively consider the latest execution time of the candidate image recognition algorithm, the importance of the candidate image recognition algorithm, and the longest operation time interval value allowed by the candidate image recognition algorithm, the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed by the candidate image recognition algorithm. Correspondingly, the sub-step S201d includes: and sorting the candidate image recognition algorithms according to the quotient of the product value and the longest operation time interval value.
If cur_time denotes the current time, last_time denotes the latest execution time of a candidate image recognition algorithm, interv denotes the longest operation time interval value allowed by that algorithm, and weight denotes its weight coefficient, then the quotient of the product value and the longest operation time interval value in sub-step S201d corresponds to the formula (cur_time - last_time) * weight / interv. The value of this formula can be calculated for each candidate image recognition algorithm, and the candidate image recognition algorithms can then be sorted by it.
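A direct transcription of that formula (the function name is an assumption):

```python
def priority(cur_time, last_time, weight, interv):
    """Priority value from sub-step S201d:
    (cur_time - last_time) * weight / interv."""
    return (cur_time - last_time) * weight / interv

# A candidate idle for 2 s, with weight 2.0 and an allowed interval of 1 s:
p = priority(cur_time=10.0, last_time=8.0, weight=2.0, interv=1.0)  # 4.0
# Candidates are then sorted by this value in descending order.
```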
And step S202, using a target image recognition algorithm to perform image processing on the image frames to be processed in the video.
In this embodiment, the implementation content of step S202 is the same as that of step S102 in the first embodiment, and has corresponding beneficial effects, which are not described herein again.
As can be seen from the above, when determining the target image recognition algorithm, this embodiment of the present invention considers the latest execution time of each candidate image recognition algorithm, optionally together with each algorithm's importance and/or its allowed longest operation time interval value. This not only satisfies the real-time requirement of video processing, but also reduces the error of the video processing result and ensures the reliability of image frame processing in the video.
Example three
A third embodiment of the present application provides a video processing method, as shown in fig. 3, fig. 3 is a schematic flowchart of the video processing method provided in the third embodiment of the present application, and the method includes the following steps:
step S301, determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms.
In this embodiment, the implementation content of step S301 is the same as step S101 in the first embodiment, or the same as step S201 in the second embodiment, and has corresponding beneficial effects, which are not described herein again.
Step S302, image processing is carried out on the image frames to be processed in the video by using a target image recognition algorithm.
In this embodiment, the implementation content of step S302 is the same as step S102 in the first embodiment, or the same as step S202 in the second embodiment, and has corresponding beneficial effects, which are not described herein again.
Step S303, updating the priority calculation information corresponding to the candidate image recognition algorithm and updating the range of the image frame to be processed.
In this embodiment, after the target image recognition algorithm has performed image processing on the image frames to be processed, the information corresponding to the current time of each candidate image recognition algorithm changes as time passes; that is, the priority calculation information corresponding to the candidate image recognition algorithms may change, and the image frames to be processed become processed image frames. Therefore, in order to process consecutive image frames, the priority calculation information corresponding to the image recognition algorithms and the range of the image frames to be processed need to be updated.
Optionally, in order to implement processing of consecutive image frames, after one or more image frames to be processed are subjected to image processing by using the target image recognition algorithm, the state of the image frame to be processed is converted from unprocessed to processed, so that at least one image frame adjacent to the image frame to be processed, which is not yet processed, may be determined as a new image frame to be processed.
Optionally, if the number of the image frames to be processed is 1, after one image frame to be processed is image-processed by using the target image recognition algorithm in step S302, one image frame following the image frame to be processed may be determined as a new image frame to be processed in step S303, so that not only consecutive image frames may be processed, but also a time interval between two execution times of each candidate image recognition algorithm may be shortened.
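The update in step S303 for the single-frame case could be sketched as follows; the function and field names are illustrative assumptions, not identifiers from the patent:

```python
# Hedged sketch of step S303 for the single-frame case: after the target
# algorithm has processed frame `idx`, its latest execution time is
# refreshed and the range of frames to be processed slides forward by
# one. All names here are illustrative, not from the patent.
def update_after_processing(target, cur_time, idx, total_frames):
    target["last_time"] = cur_time  # update the priority calculation info
    next_idx = idx + 1              # the following frame becomes the new
    if next_idx >= total_frames:    # frame to be processed
        return None                 # no more frames in the video
    return next_idx
```

Sliding the window forward one frame at a time is what keeps consecutive frames processed and shortens the gap between two executions of each candidate algorithm.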
As can be seen from the above embodiments of the present invention, the embodiment of the present invention updates the priority calculation information corresponding to the image recognition algorithm in real time, and updates the range of the image frame to be processed, so that the processing of the continuous image frame can be realized, and the real-time performance of the video processing is ensured.
Example four
As shown in fig. 4, fig. 4 shows a video processing apparatus 40 according to a fourth embodiment of the present application, including: a memory 401, a processor 402 and a video collector 403, wherein the video collector 403 is used for collecting a target to be tracked in a target area; the memory 401 is used for storing program codes; and the processor 402, invoking the program code, when executed, is configured to:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
and carrying out image processing on the image frames to be processed in the video by using the target image recognition algorithm.
In one embodiment, the priority calculation information includes execution time information identifying a last execution time of the candidate image recognition algorithm; correspondingly, the determining a target image recognition algorithm from the plurality of candidate image recognition algorithms according to the priority calculation information corresponding to the candidate image recognition algorithms comprises:
and sequencing the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms, so as to determine the target image recognition algorithm according to a sequencing result.
In one embodiment, the priority calculation information further includes interval time information for identifying a longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms includes:
and sequencing the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms.
In one embodiment, the sorting the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms comprises:
obtaining an idle time interval value between the current time and the latest execution time of the candidate image recognition algorithm according to the execution time information corresponding to the candidate image recognition algorithm;
and sequencing the candidate image recognition algorithms according to the quotient of the idle time interval value and the longest operation time interval value.
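The quotient-based ordering described in this embodiment could be sketched as follows; the dictionary keys stand in for the execution time information and interval time information and are assumed, not taken from the patent:

```python
# Minimal sketch of ordering candidates by the quotient of the idle time
# interval and the longest allowed operation time interval (no weighting).
# The dictionary keys are illustrative assumptions.
def order_by_quotient(candidates, cur_time):
    return sorted(
        candidates,
        key=lambda a: (cur_time - a["last_time"]) / a["interv"],
        reverse=True,  # largest quotient first: the most "overdue" algorithm
    )
```

Dividing by each algorithm's own longest allowed interval makes the quotients comparable across algorithms: a candidate idle for its full allowed interval ranks ahead of one idle for only half of its allowed interval, regardless of their absolute idle times.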
In one embodiment, the priority calculation information further includes weight information for identifying the importance of the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms includes:
and sequencing the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms.
In one embodiment, the weight information includes a weight coefficient; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms includes:
obtaining an idle time interval value between the current time and the latest execution time of the candidate image recognition algorithm according to the execution time information corresponding to the candidate image recognition algorithm;
and sorting the candidate image recognition algorithms according to the product value of the idle time interval value and the weight coefficient.
In one embodiment, the priority calculation information further includes interval time information for identifying a longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the product value of the idle time interval value and the weight coefficient comprises:
and sorting the candidate image recognition algorithms according to the quotient of the product value and the longest operation time interval value.
In one embodiment, the number of target image recognition algorithms is 1.
In one embodiment, after the image processing is performed on the image frames to be processed in the video by using the target image recognition algorithm, the processor is further configured to:
and updating the priority calculation information corresponding to the candidate image identification algorithm and the range of the image frame to be processed.
In one embodiment, the number of the image frames to be processed is 1, and the updating the range of the image frames to be processed includes:
and determining an image frame after the image frame to be processed as a new image frame to be processed.
EXAMPLE five
In one embodiment, a handheld camera includes the video processing apparatus described in the fourth embodiment, and further includes: the carrier is fixedly connected with the video collector and used for carrying at least one part of the video collector.
In one embodiment, the carrier includes, but is not limited to, a handheld pan and tilt head.
In one embodiment, the handheld pan/tilt head is a handheld tri-axial pan/tilt head.
In one embodiment, the video collector includes, but is not limited to, a camera for a handheld three-axis pan-tilt head.
The basic structure of the handheld pan/tilt camera will be briefly described below.
As shown in fig. 5, the handheld pan-tilt head 1 according to the embodiment of the present invention includes: a handle 11 and a shooting device 12 carried on the handle 11. In this embodiment, the shooting device 12 may include a three-axis pan-tilt camera; in other embodiments, it may include a two-axis pan-tilt camera or a pan-tilt camera with more than three axes.
The handle 11 is provided with a display 13 for displaying the contents of the camera 12. The present invention does not limit the type of the display 13.
By providing the display 13 on the handle 11 of the handheld pan-tilt head 1, the display can show the content shot by the shooting device 12, so that the user can quickly browse the pictures or videos taken by the shooting device 12 through the display 13. This improves the interactivity and enjoyment between the handheld pan-tilt head 1 and the user and meets the diversified needs of the user.
In one embodiment, the handle 11 is further provided with an operation function portion for controlling the photographing device 12, and by operating the operation function portion, it is possible to control the operation of the photographing device 12, for example, to control the on and off of the photographing device 12, to control the photographing of the photographing device 12, to control the posture change of the pan-tilt portion of the photographing device 12, and the like, so as to facilitate the user to quickly operate the photographing device 12. The operation function part can be in the form of a key, a knob or a touch screen.
In one embodiment, the operation function portion includes a shooting button 14 for controlling the shooting of the shooting device 12, a power/function button 15 for controlling the on/off and other functions of the shooting device 12, and a universal key 16 for controlling the movement of the pan/tilt head. Of course, the operation function portion may further include other control keys, such as an image storage key, an image playing control key, and the like, which may be set according to actual requirements.
In one embodiment, the operation function portion and the display 13 are disposed on the same surface of the handle 11, and the operation function portion and the display 13 shown in fig. 5 are disposed on the front surface of the handle 11, so as to meet the ergonomics and make the overall appearance layout of the handheld tripod head 1 more reasonable and beautiful.
Further, the side of the handle 11 is provided with a function key A that lets the user quickly and intelligently produce a finished video with one key. With the camera powered on, clicking the orange key on the right side of the body starts the function: a video segment is shot automatically at intervals, for a total of N segments (N ≥ 2). After a mobile device such as a mobile phone is connected and the "one-key film forming" function is selected, the system intelligently screens the shot segments, matches them with a suitable template, and quickly generates a polished work.
In an alternative embodiment, the handle 11 is also provided with a latching groove 17 for the insertion of a memory element. In this embodiment, the card slot 17 is provided on a side surface of the handle 11 adjacent to the display 13, and the image captured by the imaging device 12 can be stored in the memory card by inserting the memory card into the card slot 17. In addition, the card slot 17 is arranged on the side part, so that the use of other functions is not influenced, and the user experience is better.
In one embodiment, a power supply battery for supplying power to the handle 11 and the camera 12 may be disposed inside the handle 11. The power supply battery can adopt a lithium battery, and has large capacity and small volume so as to realize the miniaturization design of the handheld cloud deck 1.
In one embodiment, the handle 11 is further provided with a charging/USB interface 18. In this embodiment, the charging interface/USB interface 18 is disposed at the bottom of the handle 11, so as to facilitate connection with an external power source or a storage device, thereby charging the power supply battery or performing data transmission.
In one embodiment, the handle 11 is further provided with a sound pickup hole 19 for receiving audio signals, with a microphone communicating with the interior of the sound pickup hole 19. There may be one or more sound pickup holes 19. An indicator light 20 for displaying status is also included. The user may interact with the display 13 by voice through the sound pickup hole 19, and the indicator light 20 can serve as a reminder: through it, the user can learn the battery level and the currently executed function of the handheld pan-tilt head 1. In addition, the sound pickup hole 19 and the indicator light 20 can both be arranged on the front surface of the handle 11, which better suits the user's usage habits and operation convenience.
In one embodiment, the camera 12 includes a pan-tilt support and a camera mounted on the pan-tilt support. The camera may be a camera, or may be an image pickup element composed of a lens and an image sensor (such as a CMOS or CCD), and may be specifically selected as needed. The camera may be integrated on a pan-tilt stand, so that the camera 12 is a pan-tilt camera; the camera can also be an external shooting device which can be detachably connected or clamped and carried on the tripod head bracket.
In one embodiment, the pan/tilt support is a three-axis pan/tilt support and the camera 12 is a three-axis pan/tilt camera. The three-axis pan-tilt support comprises a yaw shaft assembly 22, a transverse rolling shaft assembly 23 movably connected with the yaw shaft assembly 22, and a pitch shaft assembly 24 movably connected with the transverse rolling shaft assembly 23, and the shooting device is carried on the pitch shaft assembly 24. The yaw shaft assembly 22 drives the camera 12 to rotate in the yaw direction. Of course, in other examples, the holder may also be a two-axis holder, a four-axis holder, or the like, which may be specifically selected as needed.
In one embodiment, a mounting portion is provided at one end of the connecting arm connected to the yaw axle assembly, and a yaw axle assembly may be provided in the handle, the yaw axle assembly driving the camera 12 to rotate in the yaw direction.
In an alternative embodiment, as shown in fig. 6, the handle 11 is provided with an adaptor 26 for coupling with a mobile device 2 (such as a mobile phone), and the adaptor 26 is detachably connected with the handle 11. The adaptor 26 protrudes from the side of the handle to connect with the mobile device 2, and when the adaptor 26 is connected with the mobile device 2, the handheld tripod head 1 is butted with the adaptor 26 and is used for being supported at the end of the mobile device 2.
By providing on the handle 11 the adaptor 26 for connecting with the mobile device 2, the handle 11 and the mobile device 2 can be connected to each other, and the handle 11 can serve as a base for the mobile device 2. The user can grip the other end of the mobile device 2 to hold up and operate the handheld pan-tilt head 1; the connection is convenient and fast, and the product is aesthetically pleasing. In addition, after the handle 11 is coupled with the mobile device 2 through the adaptor 26, a communication connection between the handheld pan-tilt head 1 and the mobile device 2 can be established, and data can be transmitted between the shooting device 12 and the mobile device 2.
In one embodiment, the adaptor 26 is removably attached to the handle 11, i.e., mechanical connection or disconnection between the adaptor 26 and the handle 11 is possible. Further, the adaptor 26 is provided with an electrical contact, and the handle 11 is provided with an electrical contact mating portion that mates with the electrical contact.
In this way, the adapter 26 can be removed from the handle 11 when the handheld head 1 does not need to be connected to the mobile device 2. When the handheld cloud platform 1 needs to be connected with the mobile device 2, the adaptor 26 is mounted on the handle 11, the mechanical connection between the adaptor 26 and the handle 11 is completed, and meanwhile, the electrical connection between the electrical contact part and the electrical contact matching part is guaranteed through the connection between the electrical contact part and the electrical contact matching part, so that data transmission between the shooting device 12 and the mobile device 2 can be achieved through the adaptor 26.
In one embodiment, as shown in fig. 5, a receiving groove 27 is formed on a side portion of the handle 11, and the adaptor 26 is slidably engaged in the receiving groove 27. When the adaptor 26 is received in the receiving slot 27, a portion of the adaptor 26 protrudes from the receiving slot 27, and a portion of the adaptor 26 protruding from the receiving slot 27 is used for connecting with the mobile device 2.
In one embodiment, referring to fig. 5, when the adaptor 26 is reversed and inserted into the receiving groove 27, it is flush with the receiving groove 27, so that the adaptor 26 is received in the receiving groove 27 of the handle 11.
Therefore, when the handheld tripod head 1 needs to be connected with the mobile device 2, the adaptor 26 can be inserted into the accommodating groove 27 from the adaptor part, so that the adaptor 26 protrudes out of the accommodating groove 27, and the mobile device 2 and the handle 11 can be connected with each other.
After use, or when the mobile device 2 needs to be detached, the adaptor 26 may be taken out of the receiving groove 27 of the handle 11, reversed, and put back into the receiving groove 27, so that the adaptor 26 is received in the handle 11. Since the adaptor 26 is flush with the receiving groove 27 of the handle 11, the surface of the handle 11 remains smooth when the adaptor 26 is received in it, and the adaptor 26 is more convenient to carry in that state.
In one embodiment, the receiving groove 27 is semi-open and is formed on one side surface of the handle 11, so that the adaptor 26 can be more easily slidably engaged with the receiving groove 27. Of course, in other examples, the adaptor 26 may be detachably connected to the receiving slot 27 of the handle 11 by a snap connection, a plug connection, or the like.
In one embodiment, the receiving slot 27 is formed on the side of the handle 11, and a cover 28 can be snapped on to cover the receiving slot 27 when the adaptor is not in use, which is convenient for the user to operate without affecting the overall appearance of the front and side of the handle.
In one embodiment, the electrical contact and the electrical contact mating portion may be electrically connected by contact. For example, the electrical contact may be selected as a pogo pin, an electrical plug interface, or an electrical contact. Of course, in other examples, the electrical contact portion and the electrical contact mating portion may be directly connected by surface-to-surface contact.
A1, a video processing method, comprising:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
and carrying out image processing on the image frames to be processed in the video by using the target image recognition algorithm.
A2, the video processing method according to a1, wherein the priority calculation information includes execution time information for identifying a latest execution time of the candidate image recognition algorithm; correspondingly, the determining a target image recognition algorithm from the plurality of candidate image recognition algorithms according to the priority calculation information corresponding to the candidate image recognition algorithms comprises:
and sequencing the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms, so as to determine the target image recognition algorithm according to a sequencing result.
A3, the video processing method according to a2, wherein the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms includes:
and sequencing the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms.
A4, the video processing method according to the A3, wherein the sorting the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms comprises:
obtaining an idle time interval value between the current time and the latest execution time of the candidate image recognition algorithm according to the execution time information corresponding to the candidate image recognition algorithm;
and sequencing the candidate image recognition algorithms according to the quotient of the idle time interval value and the longest operation time interval value.
A5, the video processing method according to a2, wherein the priority calculation information further includes weight information for identifying the degree of importance of the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms includes:
and sequencing the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms.
A6, the video processing method according to a5, wherein the weight information includes a weight coefficient; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms includes:
obtaining an idle time interval value between the current time and the latest execution time of the candidate image recognition algorithm according to the execution time information corresponding to the candidate image recognition algorithm;
and sorting the candidate image recognition algorithms according to the product value of the idle time interval value and the weight coefficient.
A7, the video processing method according to a6, wherein the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the product value of the idle time interval value and the weight coefficient comprises:
and sorting the candidate image recognition algorithms according to the quotient of the product value and the longest operation time interval value.
A8, the video processing method according to A1, wherein the number of target image recognition algorithms is 1.
A9, the video processing method according to A1, wherein, after the image processing is performed on the image frames to be processed in the video by using the target image recognition algorithm, the method further comprises:
updating the priority calculation information corresponding to the candidate image recognition algorithm, and updating the range of the image frame to be processed.
A10, the video processing method according to a9, wherein the number of the image frames to be processed is 1, and the updating the range of the image frames to be processed comprises:
and determining an image frame after the image frame to be processed as a new image frame to be processed.
A11, a video processing apparatus, comprising: the device comprises a memory, a processor and a video collector, wherein the video collector is used for collecting a target to be tracked in a target area; the memory is used for storing program codes; the processor, invoking the program code, when executed, is configured to:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
and carrying out image processing on the image frames to be processed in the video by using the target image recognition algorithm.
A12, the video processing device according to a11, characterized in that the priority calculation information includes execution time information for identifying a latest execution time of the candidate image recognition algorithm; correspondingly, the determining a target image recognition algorithm from the plurality of candidate image recognition algorithms according to the priority calculation information corresponding to the candidate image recognition algorithms comprises:
and sequencing the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms, so as to determine the target image recognition algorithm according to a sequencing result.
A13, the video processing device according to a12, characterized in that the priority calculation information further includes interval time information for identifying a longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms includes:
and sequencing the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms.
A14, the video processing device according to A13, wherein the sorting the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms comprises:
obtaining an idle time interval value between the current time and the latest execution time of the candidate image recognition algorithm according to the execution time information corresponding to the candidate image recognition algorithm;
and sequencing the candidate image recognition algorithms according to the quotient of the idle time interval value and the longest operation time interval value.
A15, the video processing device according to a12, characterized in that said priority calculation information further includes weight information for identifying the degree of importance of the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms includes:
and sequencing the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms.
A16, the video processing device according to a15, wherein the weight information includes a weight coefficient; correspondingly, the sorting the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms includes:
obtaining an idle time interval value between the current time and the latest execution time of the candidate image recognition algorithm according to the execution time information corresponding to the candidate image recognition algorithm;
and sorting the candidate image recognition algorithms according to the product value of the idle time interval value and the weight coefficient.
A17, the video processing device according to a16, characterized in that the priority calculation information further includes interval time information for identifying a longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, the sorting the candidate image recognition algorithms according to the product value of the idle time interval value and the weight coefficient comprises:
and sorting the candidate image recognition algorithms according to the quotient of the product value and the longest operation time interval value.
A18, the video processing device according to a11, wherein the number of target image recognition algorithms is 1.
A19, the video processing device according to A11, wherein, after the image processing is performed on the image frames to be processed in the video by using the target image recognition algorithm, the processor is further configured to:
and updating the priority calculation information corresponding to the candidate image identification algorithm and the range of the image frame to be processed.
A20, the video processing device according to a19, wherein the number of the image frames to be processed is 1, and the updating the range of the image frames to be processed comprises:
and determining an image frame after the image frame to be processed as a new image frame to be processed.
A21, a hand-held camera, comprising a video processing device according to any one of a11-a20, further comprising: the carrier is fixedly connected with the video collector and used for carrying at least one part of the video collector.
A22, the hand-held camera of a21, wherein the carrier comprises but is not limited to a hand-held pan-tilt head.
A23, the handheld camera of a22, wherein the handheld pan/tilt head is a handheld tri-axial pan/tilt head.
A24, the handheld camera of A21, wherein the video collector includes, but is not limited to, a camera for a handheld three-axis pan-tilt head.
Thus, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement in a method flow). However, as technology has advanced, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized by hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to a software compiler used in program development and writing, and the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present.
It will also be apparent to those skilled in the art that a hardware circuit implementing a given logical method flow can be readily obtained simply by programming the method flow into an integrated circuit using one of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing the various functions may also be regarded as structures within the hardware component. Indeed, the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made from one to another, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, it is described relatively simply; for relevant details, reference may be made to the corresponding description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (22)

1. A video processing method, comprising:
sorting a plurality of candidate image recognition algorithms according to execution time information that is included in priority calculation information corresponding to the candidate image recognition algorithms and that identifies the latest execution time of each candidate image recognition algorithm, so as to determine a target image recognition algorithm according to the sorting result; and
performing image processing on an image frame to be processed in the video by using the target image recognition algorithm.
2. The video processing method of claim 1, wherein the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed for the candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information, included in the priority calculation information corresponding to the candidate image recognition algorithms, for identifying the latest execution time of the candidate image recognition algorithms comprises:
sorting the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms.
3. The video processing method according to claim 2, wherein the sorting of the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms comprises:
obtaining an idle time interval value between the current time and the latest execution time of each candidate image recognition algorithm according to the execution time information corresponding to that candidate image recognition algorithm; and
sorting the candidate image recognition algorithms according to the quotient of the idle time interval value and the longest operation time interval value.
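The ranking described in claims 1-3 can be sketched briefly. This is an illustrative sketch only, not the patent's actual implementation; all names (`latest_execution_time`, `max_interval`, the example algorithm names) are assumptions introduced for the example.

```python
import time

def rank_candidates(candidates, now=None):
    """Sort candidates by idle-interval / longest-allowed-interval,
    largest quotient first, so the most 'overdue' algorithm is chosen."""
    if now is None:
        now = time.time()

    def priority(c):
        idle = now - c["latest_execution_time"]  # idle time interval value
        return idle / c["max_interval"]          # quotient per claim 3

    return sorted(candidates, key=priority, reverse=True)

candidates = [
    {"name": "face", "latest_execution_time": 90.0, "max_interval": 5.0},
    {"name": "gesture", "latest_execution_time": 95.0, "max_interval": 1.0},
]
ranked = rank_candidates(candidates, now=100.0)
# face: (100-90)/5 = 2.0; gesture: (100-95)/1 = 5.0, so gesture ranks first
```

Dividing by the longest allowed interval normalizes the idle time, so an algorithm with a tight time budget can outrank one that has merely been idle longer in absolute terms.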
4. The video processing method according to claim 1, wherein the priority calculation information further includes weight information for identifying the degree of importance of the candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information, included in the priority calculation information corresponding to the candidate image recognition algorithms, for identifying the latest execution time of the candidate image recognition algorithms comprises:
sorting the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms.
5. The video processing method according to claim 4, wherein the weight information includes a weight coefficient; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms comprises:
obtaining an idle time interval value between the current time and the latest execution time of each candidate image recognition algorithm according to the execution time information corresponding to that candidate image recognition algorithm; and
sorting the candidate image recognition algorithms according to the product of the idle time interval value and the weight coefficient.
6. The video processing method of claim 5, wherein the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed for the candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the product of the idle time interval value and the weight coefficient comprises:
sorting the candidate image recognition algorithms according to the quotient of that product and the longest operation time interval value.
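The combined formula of claims 4-6 can be written out as a short sketch. The function and parameter names below are illustrative assumptions, not identifiers from the patent.

```python
def priority_score(now, latest_execution_time, weight, max_interval):
    """Hypothetical priority per claims 4-6: the idle interval is weighted
    by a per-algorithm importance coefficient, then divided by the longest
    allowed operation time interval value."""
    idle = now - latest_execution_time      # idle time interval value
    return (idle * weight) / max_interval   # (product) / (longest interval)

# A higher weight and a tighter allowed interval let a briefly idle but
# important algorithm outrank one that has merely been idle longer:
low = priority_score(now=100.0, latest_execution_time=90.0, weight=1.0, max_interval=5.0)
high = priority_score(now=100.0, latest_execution_time=96.0, weight=3.0, max_interval=2.0)
# low = (10 * 1) / 5 = 2.0; high = (4 * 3) / 2 = 6.0
```

The weight coefficient thus acts as a tiebreaker between algorithms whose normalized idle times are otherwise comparable.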
7. The video processing method according to claim 1, wherein the number of target image recognition algorithms is 1.
8. The video processing method according to claim 1, wherein, after the image processing of the image frames to be processed in the video using the target image recognition algorithm, the method further comprises:
updating the priority calculation information corresponding to the candidate image recognition algorithms, and updating the range of the image frames to be processed.
9. The video processing method according to claim 8, wherein the number of image frames to be processed is 1, and the updating of the range of the image frames to be processed comprises:
determining the image frame following the current image frame to be processed as the new image frame to be processed.
10. A video processing apparatus, comprising a memory, a processor, and a video collector, wherein the video collector is configured to capture a target to be tracked in a target area; the memory is configured to store a program; and the processor, which invokes the program, is configured, when the program is executed, to:
sort a plurality of candidate image recognition algorithms according to execution time information that is included in priority calculation information corresponding to the candidate image recognition algorithms and that identifies the latest execution time of each candidate image recognition algorithm, so as to determine a target image recognition algorithm according to the sorting result; and
perform image processing on an image frame to be processed in the video by using the target image recognition algorithm.
11. The video processing device of claim 10, wherein the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed for the candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information, included in the priority calculation information corresponding to the candidate image recognition algorithms, for identifying the latest execution time of the candidate image recognition algorithms comprises:
sorting the candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms.
12. The video processing device according to claim 11, wherein the sorting of the plurality of candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms comprises:
obtaining an idle time interval value between the current time and the latest execution time of each candidate image recognition algorithm according to the execution time information corresponding to that candidate image recognition algorithm; and
sorting the candidate image recognition algorithms according to the quotient of the idle time interval value and the longest operation time interval value.
13. The video processing device according to claim 10, wherein the priority calculation information further includes weight information for identifying the degree of importance of the candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information, included in the priority calculation information corresponding to the candidate image recognition algorithms, for identifying the latest execution time of the candidate image recognition algorithms comprises:
sorting the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms.
14. The video processing device according to claim 13, wherein the weight information includes a weight coefficient; correspondingly, the sorting of the candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms comprises:
obtaining an idle time interval value between the current time and the latest execution time of each candidate image recognition algorithm according to the execution time information corresponding to that candidate image recognition algorithm; and
sorting the candidate image recognition algorithms according to the product of the idle time interval value and the weight coefficient.
15. The video processing device of claim 14, wherein the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed for the candidate image recognition algorithm; correspondingly, the sorting of the candidate image recognition algorithms according to the product of the idle time interval value and the weight coefficient comprises:
sorting the candidate image recognition algorithms according to the quotient of that product and the longest operation time interval value.
16. The video processing device according to claim 10, wherein the number of target image recognition algorithms is 1.
17. The video processing device according to claim 10, wherein, after the image processing of the image frames to be processed in the video using the target image recognition algorithm, the processor is further configured to:
update the priority calculation information corresponding to the candidate image recognition algorithms and the range of the image frames to be processed.
18. The video processing device according to claim 17, wherein the number of image frames to be processed is 1, and the updating of the range of the image frames to be processed comprises:
determining the image frame following the current image frame to be processed as the new image frame to be processed.
19. A handheld camera comprising the video processing device according to any one of claims 10-18, further comprising: a carrier fixedly connected with the video collector and configured to carry at least a part of the video collector.
20. The handheld camera of claim 19, wherein the carrier comprises, but is not limited to, a handheld pan-tilt head.
21. The handheld camera of claim 20, wherein the handheld pan-tilt head is a handheld three-axis pan-tilt head.
22. The handheld camera of claim 19, wherein the video collector comprises, but is not limited to, a camera for a handheld three-axis pan-tilt head.
CN202010296289.1A 2020-04-15 2020-04-15 Video processing method and device and handheld camera Active CN112052713B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010296289.1A CN112052713B (en) 2020-04-15 2020-04-15 Video processing method and device and handheld camera
PCT/CN2020/099833 WO2021208256A1 (en) 2020-04-15 2020-07-02 Video processing method and apparatus, and handheld camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010296289.1A CN112052713B (en) 2020-04-15 2020-04-15 Video processing method and device and handheld camera

Publications (2)

Publication Number Publication Date
CN112052713A CN112052713A (en) 2020-12-08
CN112052713B true CN112052713B (en) 2022-01-11

Family

ID=73609668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010296289.1A Active CN112052713B (en) 2020-04-15 2020-04-15 Video processing method and device and handheld camera

Country Status (2)

Country Link
CN (1) CN112052713B (en)
WO (1) WO2021208256A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113347459B (en) * 2021-06-11 2023-08-11 杭州星犀科技有限公司 Android system-based autonomous audio source switching method and device and computing equipment
CN114219883A (en) * 2021-12-10 2022-03-22 北京字跳网络技术有限公司 Video special effect processing method and device, electronic equipment and program product
CN115909186A (en) * 2022-09-30 2023-04-04 北京瑞莱智慧科技有限公司 Image information identification method and device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779412A (en) * 2011-05-13 2012-11-14 深圳市新创中天信息科技发展有限公司 Integrated video traffic information detection method and system
CN107516097A (en) * 2017-08-10 2017-12-26 青岛海信电器股份有限公司 TV station symbol recognition method and apparatus
CN107861684A (en) * 2017-11-23 2018-03-30 广州视睿电子科技有限公司 Write recognition methods, device, storage medium and computer equipment
CN108830198A (en) * 2018-05-31 2018-11-16 上海玮舟微电子科技有限公司 Recognition methods, device, equipment and the storage medium of video format
CN108875519A (en) * 2017-12-19 2018-11-23 北京旷视科技有限公司 Method for checking object, device and system and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3622443A4 (en) * 2017-05-08 2021-01-20 Plantsnap, Inc. Systems and methods for electronically identifying plant species


Also Published As

Publication number Publication date
CN112052713A (en) 2020-12-08
WO2021208256A1 (en) 2021-10-21

Similar Documents

Publication Publication Date Title
CN112052713B (en) Video processing method and device and handheld camera
CN108596976B (en) Method, device and equipment for relocating camera attitude tracking process and storage medium
CN110147805B (en) Image processing method, device, terminal and storage medium
EP3246802A1 (en) Mobile terminal and method for controlling the same
CN111052047B (en) Vein scanning device for automatic gesture and finger recognition
CN110471858B (en) Application program testing method, device and storage medium
CN110413837B (en) Video recommendation method and device
WO2021208253A1 (en) Tracking object determination method and device, and handheld camera
CN110572716B (en) Multimedia data playing method, device and storage medium
CN111539880B (en) Image processing method, device and handheld camera
WO2021208251A1 (en) Face tracking method and face tracking device
CN112052357B (en) Video clip marking method and device and handheld camera
CN112261491B (en) Video time sequence marking method and device, electronic equipment and storage medium
CN110555102A (en) media title recognition method, device and storage medium
CN111589138B (en) Action prediction method, device, equipment and storage medium
CN111836073A (en) Method, device and equipment for determining video definition and storage medium
CN111479061B (en) Tracking state determination method and device and handheld camera
CN111563913B (en) Searching method and device based on tracking target and handheld camera thereof
CN111479063B (en) Holder driving method and device and handheld camera
CN111428158B (en) Method and device for recommending position, electronic equipment and readable storage medium
CN110033502A (en) Video creating method, device, storage medium and electronic equipment
CN111508001A (en) Method and device for retrieving tracking target and handheld camera
CN111539283B (en) Face tracking method and face tracking equipment
CN111524162B (en) Method and device for retrieving tracking target and handheld camera
CN111479062B (en) Target object tracking frame display method and device and handheld camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant