WO2021208256A1 - Video processing method, device, and handheld camera - Google Patents

Video processing method, device, and handheld camera

Info

Publication number
WO2021208256A1
Authority
WO
WIPO (PCT)
Prior art keywords
image recognition
candidate image
recognition algorithm
execution time
video
Application number
PCT/CN2020/099833
Other languages
English (en)
French (fr)
Inventor
康含玉 (Kang Hanyu)
梁峰 (Liang Feng)
Original Assignee
上海摩象网络科技有限公司 (Shanghai Moxiang Network Technology Co., Ltd.)
Application filed by 上海摩象网络科技有限公司
Publication of WO2021208256A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Definitions

  • the embodiments of the present application relate to the field of image processing technologies, and in particular, to a video processing method, device, and handheld camera.
  • one of the technical problems solved by the embodiments of the present invention is to provide a video processing method, device, and handheld camera to overcome the prior-art disadvantage that processing video clips with multiple algorithms takes too much time.
  • the embodiment of the present application provides a video processing method, including:
  • the target image recognition algorithm is used to perform image processing on the image frames to be processed in the video.
  • the priority calculation information includes execution time information for identifying the last execution time of the candidate image recognition algorithm; correspondingly, determining the target image recognition algorithm from the multiple candidate image recognition algorithms according to the priority calculation information corresponding to the candidate image recognition algorithm includes:
  • the priority calculation information further includes interval time information used to identify the longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the multiple candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithm includes:
  • sorting the multiple candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithm includes:
  • the priority calculation information further includes weight information used to identify the importance of the candidate image recognition algorithm; correspondingly, sorting the multiple candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithm includes:
  • the weight information includes a weight coefficient; correspondingly, sorting the multiple candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithm includes:
  • the priority calculation information further includes interval time information used to identify the longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the multiple candidate image recognition algorithms according to the product of the idle time interval value and the weight coefficient includes:
  • the number of the target image recognition algorithm is one.
  • the method further includes: updating the priority calculation information corresponding to the candidate image recognition algorithm, and updating the range of the to-be-processed image frame.
  • the number of the image frames to be processed is 1, and updating the range of the image frames to be processed includes:
  • determining an image frame following the image frame to be processed as the new image frame to be processed.
  • An embodiment of the present application also provides a video processing device, including: a memory, a processor, and a video collector; the video collector is used to collect a target to be tracked in a target area; the memory is used to store program code; the processor calls the program code, and when the program code is executed, it is used to perform the following operations:
  • the target image recognition algorithm is used to perform image processing on the image frames to be processed in the video.
  • An embodiment of the present application also provides a handheld camera, including the aforementioned video processing device, and further including: a carrier, which is fixedly connected to the video collector and configured to carry at least a part of the video collector.
  • the carrier includes but is not limited to a handheld pan/tilt.
  • the handheld PTZ is a handheld three-axis PTZ.
  • the video capture device includes, but is not limited to, a handheld three-axis pan/tilt camera.
  • the target image recognition algorithm is determined from multiple candidate image recognition algorithms according to the priority calculation information corresponding to the candidate image recognition algorithm; then the target image recognition algorithm is used to perform image processing on the image frames to be processed in the video. Therefore, the embodiment of the present invention can not only use multiple image recognition algorithms to perform corresponding image processing on the image frames in the video, meeting diversified video processing or description requirements, but also, because each image frame to be processed is processed using only the target image recognition algorithm, reduce the image processing time and meet the need for real-time processing during video shooting.
  • FIG. 1 is a schematic flowchart of a video processing method provided in Embodiment 1 of this application;
  • FIG. 2 is a schematic flowchart of a video processing method provided in Embodiment 2 of this application;
  • FIG. 3 is a schematic flowchart of a video processing method provided in Embodiment 3 of this application;
  • FIG. 4 is a schematic structural diagram of a video processing device provided in Embodiment 4 of this application.
  • FIG. 5 is a schematic structural diagram of a handheld pan/tilt head provided by Embodiment 5 of the application;
  • FIG. 6 is a schematic structural diagram of a handheld PTZ connected with a mobile phone according to Embodiment 5 of the application;
  • FIG. 7 is a schematic structural diagram of a handheld pan/tilt head provided in Embodiment 5 of this application.
  • FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present application, including the following steps:
  • Step S101 Determine a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithm.
  • the candidate image recognition algorithm is used to recognize image frames in the video and obtain corresponding description information.
  • the objects recognized by different candidate image recognition algorithms, the description information generated, the time spent, and the required processor resources may all be different.
  • In different application scenarios, the specific types and numbers of candidate image recognition algorithms are different; in actual applications, they can be chosen according to the requirements of the video description.
  • the priority calculation information is used to identify the priority of using multiple candidate image recognition algorithms. This embodiment does not limit the calculation, identification, recording format, etc. of the priority calculation information.
  • priority calculation information can score or rank all candidate image recognition algorithms according to a preset priority calculation model, and use numbers or words to identify the priority of use of multiple candidate image recognition algorithms.
  • As the video is processed, the attribute information corresponding to each candidate image recognition algorithm changes; therefore, the priority calculation information corresponding to each candidate image recognition algorithm can be obtained according to the attribute information corresponding to that algorithm at the current time.
  • the target image recognition algorithm is one or more of candidate image recognition algorithms, which can be set according to at least one of video description requirements, hardware performance, or time-consuming requirements in actual applications.
  • the number of target image recognition algorithms can be set to 1, that is, only one target image recognition algorithm is determined at a time, so that only one target image recognition algorithm is subsequently used to perform image processing on the image frame to be processed.
  • Step S102 Use the target image recognition algorithm to perform image processing on the image frame to be processed in the video.
  • the video includes multiple consecutive image frames.
  • the image frame to be processed is an image frame in the video that has not been processed by any candidate image recognition algorithm.
  • the image frame to be processed may be one image frame or multiple consecutive image frames, which is not specifically limited in this embodiment.
  • the target image recognition algorithm performs image processing on at least one to-be-processed image frame adjacent to the processed image frames; the to-be-processed image frames are processed in sequence, following their arrangement order in the video.
  • Since the candidate image recognition algorithms are usually selected according to the needs of video processing, in order to prevent the interval between two executions of any candidate image recognition algorithm from becoming so long that it affects the final video processing effect, the target image recognition algorithm in step S102 may perform image processing on only one image frame to be processed in the video.
  • the embodiment of the present invention first determines the target image recognition algorithm from multiple candidate image recognition algorithms based on the priority calculation information corresponding to the candidate image recognition algorithm, and then uses the target image recognition algorithm to perform image processing on the image frames to be processed in the video. Therefore, the embodiment of the present invention can not only use multiple image recognition algorithms to perform corresponding image processing on the image frames in the video, meeting diversified video processing or description requirements, but also, because each image frame to be processed is processed using only the target image recognition algorithm, reduce the image processing time and meet the need for real-time processing during video shooting.
  • FIG. 2 is a schematic flowchart of a video processing method provided by an embodiment of the present application, and includes the following steps:
  • Step S201 Sort a plurality of candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithm, so as to determine the target image recognition algorithm according to the sorting result.
  • the candidate image recognition algorithm is selected according to the video processing requirements, if the interval between the two execution times of each candidate image recognition algorithm is too long, it may cause large errors in the image processing results.
  • the most recent execution time of each candidate image recognition algorithm needs to be considered.
  • the priority calculation information may include execution time information for identifying the last execution time of the candidate image recognition algorithm, so that multiple candidate image recognition algorithms can be sorted according to the last execution time corresponding to each candidate image recognition algorithm, and the target image recognition algorithm can be determined based on the sorting result.
  • For example, all candidate image recognition algorithms can be sorted so that the algorithm with the earliest last execution time is ranked first, and at least one top-ranked candidate image recognition algorithm can be selected as the target image recognition algorithm.
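  • The selection step described above can be sketched as follows in Python; this is an illustrative sketch, not part of the patent text, and the algorithm names and timestamps are hypothetical:

```python
# Hypothetical records for candidate algorithms: each holds a name and the
# timestamp (seconds) of its most recent execution.
candidates = [
    {"name": "face_detect", "last_time": 100.0},
    {"name": "scene_classify", "last_time": 98.5},
    {"name": "object_track", "last_time": 99.2},
]

def pick_target(candidates):
    # Sort by last execution time, oldest first: the algorithm that has
    # waited longest is ranked at the top and chosen as the target.
    ranked = sorted(candidates, key=lambda c: c["last_time"])
    return ranked[0]

print(pick_target(candidates)["name"])  # → scene_classify
```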
  • The execution time information can be identified by the timestamp corresponding to the last execution time of the candidate image recognition algorithm; by the time difference between the last execution time of the candidate image recognition algorithm and the current time; or by the sequence number, in the video, of the image frame processed when the candidate image recognition algorithm was last executed.
  • Different candidate image recognition algorithms may have different requirements on the number of image frames allowed between two processed image frames.
  • Even if the interval between two executions is the same, the error in the processing results of different candidate image recognition algorithms will differ. Therefore, on the basis of ensuring the reliability of the image processing result, in order to further improve the image processing effect, the longest operation time interval allowed by each candidate image recognition algorithm should also be considered when determining the target image recognition algorithm.
  • the priority calculation information may also include interval time information for identifying the longest operation interval value allowed by the candidate image recognition algorithm.
  • the identification method of the longest operation time interval value allowed by the candidate image recognition algorithm in the interval time information is not limited.
  • For example, time values such as 0.1 second or 1 second can be used for identification, or the longest interval can be identified by the number of image frames between two image frames processed by the candidate image recognition algorithm.
  • step S201 further includes: sorting the multiple candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithm.
  • step S201 may further include:
  • sub-step S201a the idle time interval value between the current time and the last execution time of the candidate image recognition algorithm is obtained according to the execution time information corresponding to the candidate image recognition algorithm.
  • There is no limit on the identification method of the current time and the idle time interval value; they can be identified by a numerical value representing time or by a numerical value representing a number of image frames, but the current time must be identified in the same way as the last execution time of the candidate image recognition algorithm.
  • For example, when the execution time information is identified by a timestamp, the idle time interval value can be identified with time values such as 0.1 second, 0.5 second, or 1 second; when the execution time information corresponding to the candidate image recognition algorithm is identified by the sequence number of the image frame processed when the candidate image recognition algorithm was last executed, the idle time interval value can be identified by a number of image frames such as 1, 2, or 3.
  • sub-step S201b a plurality of candidate image recognition algorithms are sorted according to the quotient of the idle time interval value and the longest operation time interval value.
  • A larger quotient of the idle time interval value and the longest operation time interval value indicates that the idle time interval of the candidate image recognition algorithm is closer to its allowed maximum, so its execution priority needs to be higher. By calculating this quotient, the candidate image recognition algorithms whose idle time intervals are closest to their allowed maximum operation intervals can be ranked first and preferentially determined as the target image recognition algorithm.
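  • The quotient ranking of sub-steps S201a and S201b can be sketched as follows (Python); the candidate records and their numeric values are illustrative assumptions:

```python
def rank_by_quotient(candidates, cur_time):
    # priority = idle time interval / longest allowed operation interval.
    # A quotient near (or above) 1 means the algorithm is close to overdue
    # and should run first.
    def quotient(c):
        idle = cur_time - c["last_time"]   # sub-step S201a
        return idle / c["interv"]          # sub-step S201b
    return sorted(candidates, key=quotient, reverse=True)

# Hypothetical candidates: last execution time and longest allowed
# interval, both in seconds.
candidates = [
    {"name": "A", "last_time": 9.0, "interv": 0.5},   # quotient 2.0
    {"name": "B", "last_time": 8.0, "interv": 4.0},   # quotient 0.5
]
ranked = rank_by_quotient(candidates, cur_time=10.0)
print([c["name"] for c in ranked])  # → ['A', 'B']
```

Note that A has the shorter idle time but the far tighter allowed interval, so the quotient ranks it first.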
  • In different scenarios, the frequency of use of different image recognition algorithms or the error requirements on their results may differ; therefore, in order to preferentially use the more important image recognition algorithms as the target image recognition algorithm, the priority calculation information may also include weight information used to identify the importance of the candidate image recognition algorithm.
  • step S201 further includes: sorting the multiple candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithm.
  • step S201 may also include:
  • sub-step S201c the idle time interval value between the current time and the last execution time of the candidate image recognition algorithm is obtained according to the execution time information corresponding to the candidate image recognition algorithm.
  • the sub-step S201c has the same implementation content as the aforementioned sub-step S201a, and has corresponding beneficial effects, which will not be repeated here.
  • sub-step S201d a plurality of candidate image recognition algorithms are sorted according to the product value of the idle time interval value and the weight coefficient.
  • In this way, the candidate image recognition algorithms with higher importance and longer idle time intervals are ranked first and preferentially determined as the target image recognition algorithm.
  • Further, the priority calculation information may also include interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm.
  • the sub-step S201d includes: sorting the multiple candidate image recognition algorithms according to the quotient of the product value and the longest operation time interval value.
  • cur_time represents the current time
  • last_time represents the last execution time of a candidate image recognition algorithm
  • interv represents the longest operation interval value allowed by the candidate image recognition algorithm
  • weight represents the weight coefficient.
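  • The text does not spell out a single formula, but combining sub-steps S201c and S201d with the quotient of the longest operation interval described above, the ranking key formed from the four variables can presumably be written as (cur_time - last_time) × weight / interv. A minimal sketch, with hypothetical numeric values:

```python
def priority(cur_time, last_time, interv, weight):
    # (idle time interval) * (weight coefficient) / (longest allowed
    # operation interval): larger values mean the candidate is more
    # overdue and/or more important.
    return (cur_time - last_time) * weight / interv

# Two hypothetical candidates evaluated at cur_time = 10.0:
p1 = priority(10.0, 8.0, 4.0, 1.0)   # idle 2.0 -> 2.0 * 1.0 / 4.0 = 0.5
p2 = priority(10.0, 9.0, 1.0, 2.0)   # idle 1.0 -> 1.0 * 2.0 / 1.0 = 2.0
print(p2 > p1)  # → True: the second candidate ranks first
```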
  • Step S202 Use the target image recognition algorithm to perform image processing on the to-be-processed image frames in the video.
  • The implementation content of step S202 is the same as that of step S102 in the first embodiment, and has corresponding beneficial effects, which will not be repeated here.
  • FIG. 3 is a schematic flowchart of a video processing method provided by an embodiment of the present application, including the following steps:
  • Step S301 Determine a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithm.
  • step S301 is the same as step S101 in the first embodiment, or the same as step S201 in the second embodiment, and has corresponding beneficial effects, which will not be repeated here.
  • Step S302 Use the target image recognition algorithm to perform image processing on the image frame to be processed in the video.
  • step S302 is the same as step S102 in the first embodiment, or the same as step S202 in the second embodiment, and has corresponding beneficial effects, which will not be repeated here.
  • Step S303 Update the priority calculation information corresponding to the candidate image recognition algorithm, and update the range of the image frame to be processed.
  • After step S302 is performed, the information corresponding to each candidate image recognition algorithm at the current time changes, that is, the priority calculation information corresponding to the candidate image recognition algorithms may change, and the image frame to be processed becomes a processed image frame. Therefore, in order to process continuous image frames, it is necessary to update the priority calculation information corresponding to the candidate image recognition algorithms and to update the range of the image frames to be processed.
  • After processing, the state of the image frame to be processed changes from unprocessed to processed, so at least one adjacent image frame that has not yet been processed is determined as the new image frame to be processed.
  • For example, in step S303, the image frame following the just-processed image frame may be determined as the new image frame to be processed, so that not only can continuous image frames be processed, but the time interval between two executions of each candidate image recognition algorithm can also be shortened.
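  • The loop of steps S301 to S303 can be sketched as follows (Python), using the frame index as the clock; the candidate record fields and the priority formula are illustrative assumptions, not taken from the patent text:

```python
def process_video(num_frames, candidates):
    # Per-frame loop: pick the highest-priority candidate (S301), "process"
    # the current frame with it (S302), then update its last execution time
    # and advance to the next frame (S303).
    history = []
    for t in range(num_frames):
        ranked = sorted(
            candidates,
            key=lambda c: (t - c["last_time"]) * c["weight"] / c["interv"],
            reverse=True,
        )
        target = ranked[0]
        history.append(target["name"])   # stand-in for running the algorithm
        target["last_time"] = t          # step S303: update priority info
    return history

candidates = [
    {"name": "a", "last_time": -1, "interv": 1.0, "weight": 1.0},
    {"name": "b", "last_time": -1, "interv": 2.0, "weight": 0.9},
]
print(process_video(4, candidates))  # → ['a', 'a', 'b', 'a']
```

Only one algorithm runs per frame, yet the less frequent algorithm "b" is still scheduled before its allowed interval elapses.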
  • the embodiments of the present invention update the priority calculation information corresponding to the image recognition algorithm in real time and update the range of the image frames to be processed, which can realize the processing of continuous image frames and ensure the real-time performance of video processing.
  • FIG. 4 shows a video processing device 40 provided in the fourth embodiment of the application, including: a memory 401, a processor 402, and a video collector 403; the video collector 403 is used to collect a target to be tracked in a target area; the memory 401 is used to store program code; the processor 402 calls the program code, and when the program code is executed, it is used to perform the following operations:
  • the target image recognition algorithm is used to perform image processing on the image frames to be processed in the video.
  • the priority calculation information includes execution time information for identifying the latest execution time of the candidate image recognition algorithm; correspondingly, determining the target image recognition algorithm from the multiple candidate image recognition algorithms according to the priority calculation information corresponding to the candidate image recognition algorithm includes:
  • the priority calculation information further includes interval time information for identifying the longest operation time interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the multiple candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithm includes:
  • the sorting the multiple candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithm includes:
  • the priority calculation information further includes weight information for identifying the importance of the candidate image recognition algorithm; correspondingly, sorting the multiple candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithm includes:
  • the weight information includes weight coefficients; correspondingly, the multiple candidate image recognition algorithms are sorted according to the execution time information and the weight information corresponding to the candidate image recognition algorithm include:
  • the priority calculation information further includes interval time information used to identify the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the multiple candidate image recognition algorithms according to the product of the idle time interval value and the weight coefficient includes:
  • the number of the target image recognition algorithm is one.
  • the operations further include: updating the priority calculation information corresponding to the candidate image recognition algorithm, and updating the range of the image frame to be processed.
  • the number of the image frames to be processed is 1, and the updating of the range of the image frames to be processed includes:
  • An image frame after the image frame to be processed is determined as the new image frame to be processed.
  • a handheld camera includes the video processing device described in the fourth embodiment, and further includes: a carrier, which is fixedly connected to the video collector and configured to carry at least a part of the video collector.
  • the carrier includes, but is not limited to, a handheld pan/tilt.
  • the handheld pan/tilt is a handheld three-axis pan/tilt.
  • the video capture device includes, but is not limited to, a handheld three-axis pan-tilt camera.
  • the handheld pan/tilt head 1 of the embodiment of the present invention includes a handle 11 and a photographing device 12 loaded on the handle 11.
  • the photographing device 12 may include a three-axis pan/tilt camera; in other embodiments, it may include a pan/tilt camera with two axes or with more than three axes.
  • the handle 11 is provided with a display screen 13 for displaying the shooting content of the shooting device 12.
  • the invention does not limit the type of the display screen 13.
  • By setting the display screen 13 on the handle 11 of the handheld PTZ 1, the display screen can show the shooting content of the shooting device 12, so that the user can quickly browse the pictures or videos shot by the shooting device 12 through the display screen 13, thereby improving the interaction and fun between the handheld PTZ 1 and the user and meeting the user's diverse needs.
  • the handle 11 is further provided with an operating function unit for controlling the shooting device 12; by operating the operating function unit, the operation of the shooting device 12 can be controlled, for example, controlling the opening and closing of the shooting device 12, controlling the shooting of the shooting device 12, and controlling the posture change of the pan/tilt part of the shooting device 12, so that the user can quickly operate the shooting device 12.
  • the operation function part may be in the form of a button, a knob or a touch screen.
  • the operating function unit includes a photographing button 14 for controlling the photographing of the photographing device 12, a power/function button 15 for controlling the opening and closing of the photographing device 12 and other functions, and a universal key 16 for controlling the pan/tilt; it may also include other control buttons, such as image storage buttons and image playback control buttons, which can be set according to actual needs.
  • the operation function part and the display screen 13 are arranged on the same side of the handle 11.
  • Arranging the operation function part and the display screen 13 on the same side of the handle 11 facilitates the user's operation, conforms to ergonomics, and at the same time makes the overall appearance and layout of the handheld PTZ 1 more reasonable and attractive.
  • the side of the handle 11 is provided with a function operation key A, which is used to facilitate the user to quickly and intelligently produce a finished video with one key.
  • the handle 11 is further provided with a card slot 17 for inserting a storage element.
  • the card slot 17 is provided on the side of the handle 11 adjacent to the display screen 13, and a memory card is inserted into the card slot 17 to store the images taken by the camera 12 in the memory card.
  • arranging the card slot 17 on the side does not affect the use of other functions, and the user experience is better.
  • a power supply battery for supplying power to the handle 11 and the imaging device 12 may be provided inside the handle 11.
  • the power supply battery can be a lithium battery with large capacity and small size to realize the miniaturized design of the handheld pan/tilt 1.
  • the handle 11 is also provided with a charging interface/USB interface 18.
  • the charging interface/USB interface 18 is provided at the bottom of the handle 11 to facilitate connection with an external power source or storage device, so as to charge the power supply battery or perform data transmission.
  • the handle 11 is further provided with a sound pickup hole 19 for receiving audio signals, and the sound pickup hole 19 communicates with a microphone inside.
  • There may be one or more sound pickup holes 19. The handle also includes an indicator light 20 for displaying status. Through the sound pickup hole 19, the user can realize audio interaction with the display screen 13.
  • the indicator light 20 can serve as a reminder, and the user can obtain the power status of the handheld PTZ 1 and the current execution function status through the indicator light 20.
  • the sound pickup hole 19 and the indicator light 20 can also be arranged on the front of the handle 11, which is more in line with the user's usage habits and operation convenience.
  • the imaging device 12 includes a pan-tilt support and a camera mounted on the pan-tilt support.
  • the imager may be a camera, or an image pickup element composed of a lens and an image sensor (such as CMOS or CCD), etc., which can be specifically selected according to needs.
  • the camera may be integrated on the pan-tilt support, so that the photographing device 12 is a pan-tilt camera; it may also be an external photographing device, which can be detachably connected or clamped to be mounted on the pan-tilt support.
  • the pan/tilt support is a three-axis pan/tilt support
  • the photographing device 12 is a three-axis pan/tilt camera.
  • the three-axis pan/tilt head bracket includes a yaw axis assembly 22, a roll axis assembly 23 movably connected to the yaw axis assembly 22, and a pitch axis assembly 24 movably connected to the roll axis assembly 23.
  • the camera is mounted on the pitch axis assembly 24.
  • the yaw axis assembly 22 drives the camera 12 to rotate in the yaw direction.
  • the pan/tilt support can also be a two-axis pan/tilt, a four-axis pan/tilt, etc., which can be specifically selected according to needs.
  • a mounting portion is further provided at one end of the connecting arm connected to the roll axis assembly; the yaw axis assembly may be provided in the handle, and the yaw axis assembly drives the photographing device 12 to rotate together in the yaw direction.
  • the handle 11 is provided with an adapter 26 for coupling with a mobile device 2 (such as a mobile phone), and the adapter 26 is detachably connected to the handle 11.
  • the adapter 26 protrudes from the side of the handle for connecting to the mobile device 2.
  • after the adapter 26 is connected to the mobile device 2, the handheld PTZ 1 docks with the adapter 26 and is supported at the end of the mobile device 2.
  • the handle 11 is provided with an adapter 26 for connecting with the mobile device 2, so that the handle 11 and the mobile device 2 are joined to each other.
  • the handle 11 can then serve as a base for the mobile device 2.
  • the user can hold the other end of the mobile device 2 to pick up and operate the handheld PTZ 1 together with it; the connection is convenient and fast, and the product looks good.
  • a communication connection between the handheld pan-tilt 1 and the mobile device 2 can be realized, and the camera 12 and the mobile device 2 can transmit data.
  • the adapter 26 and the handle 11 are detachably connected, that is, the adapter 26 can be mechanically attached to or removed from the handle 11. Further, the adapter 26 is provided with an electrical contact portion, and the handle 11 is provided with an electrical contact mating portion that matches the electrical contact portion.
  • the adapter 26 can be removed from the handle 11.
  • the adapter 26 is installed on the handle 11 to complete the mechanical connection between the adapter 26 and the handle 11, while the connection between the electrical contact part and the electrical contact mating part ensures the electrical connection between the two, so that data can be transmitted between the camera 12 and the mobile device 2 through the adapter 26.
  • a receiving groove 27 is provided on the side of the handle 11, and the adapter 26 is slidably clamped in the receiving groove 27. After the adapter 26 is installed in the receiving slot 27, the adapter 26 partially protrudes from the receiving slot 27, and the portion of the adapter 26 protruding from the receiving slot 27 is used to connect with the mobile device 2.
  • when the adapter 26 is inserted into the receiving groove 27 adapter-end first, the adapter portion sits flush with the receiving groove 27, so that the adapter 26 is stored inside the receiving groove 27 of the handle 11.
  • the adapter 26 can be inserted into the receiving groove 27 connector-end first, so that the adapter 26 protrudes from the receiving groove 27 and the mobile device 2 and the handle 11 can be connected to each other.
  • the adapter 26 can be taken out of the receiving slot 27 of the handle 11 and then inserted into the receiving slot 27 the other way around, so that the adapter 26 is again stored inside the handle 11.
  • when the adapter 26 is stored in the handle 11 it sits flush with the receiving groove 27, which keeps the surface of the handle 11 flat; storing the adapter 26 in the handle 11 also makes the device easier to carry.
  • the receiving groove 27 is semi-opened on one side surface of the handle 11, which makes it easier for the adapter 26 to be slidably connected to the receiving groove 27.
  • the adapter 26 can also be detachably connected to the receiving slot 27 of the handle 11 by means of a snap connection, a plug connection, or the like.
  • the receiving groove 27 is provided on the side of the handle 11.
  • when the adapter function is not in use, the receiving groove 27 is clamped shut and covered by a cover 28, which is convenient for the user and does not affect the overall appearance of the front and sides of the handle.
  • the electrical contact part and the electrical contact mating part may be electrically connected by way of contact points.
  • the electrical contact portion can be selected as a telescopic probe, can also be selected as an electrical plug-in interface, or can be selected as an electrical contact.
  • the electrical contact portion and the electrical contact mating portion can also be directly connected to each other in a surface-to-surface contact manner.
  • A1. a video processing method, characterized by comprising: determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms; and using the target image recognition algorithm to perform image processing on the image frames to be processed in the video.
  • A2. the video processing method according to A1, wherein the priority calculation information includes execution time information identifying the most recent execution time of the candidate image recognition algorithm; correspondingly, determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to the priority calculation information includes: sorting the plurality of candidate image recognition algorithms according to their execution time information, and determining the target image recognition algorithm from the sorting result.
  • A3. the video processing method according to A2, wherein the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information includes: sorting them according to both the execution time information and the interval time information.
  • A4. the video processing method according to A3, wherein sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information comprises: obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time; and sorting the candidate algorithms by the quotient of the idle time interval value and the longest operation interval value.
  • A5. the video processing method according to A2, wherein the priority calculation information further includes weight information identifying the importance of the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information includes: sorting them according to both the execution time information and the weight information.
  • A6. the video processing method according to A5, wherein the weight information includes a weight coefficient; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information includes: obtaining the idle time interval value between the current time and the candidate algorithm's most recent execution time; and sorting the candidate algorithms by the product of the idle time interval value and the weight coefficient.
  • A7. the video processing method according to A6, wherein the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient includes: sorting them by the quotient of that product and the longest operation interval value.
  • A8. the video processing method according to A1, wherein the number of target image recognition algorithms is one.
  • A9. the video processing method according to A1, wherein after the target image recognition algorithm is used to perform image processing on the image frame to be processed in the video, the method further includes: updating the priority calculation information corresponding to the candidate image recognition algorithms, and updating the range of the image frames to be processed.
  • A10. the video processing method according to A9, wherein the number of image frames to be processed is one, and updating the range of the image frames to be processed includes: determining the image frame after the image frame to be processed as the new image frame to be processed.
  • A11. a video processing device, comprising a memory, a processor, and a video capture device, the video capture device being used to capture the target to be tracked in a target area; the memory being used to store program code; and the processor calling the program code, which when executed performs the following operations: determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms; and using the target image recognition algorithm to perform image processing on the image frames to be processed in the video.
  • A12. the video processing device according to A11, wherein the priority calculation information includes execution time information identifying the most recent execution time of the candidate image recognition algorithm; correspondingly, determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to the priority calculation information includes: sorting the plurality of candidate image recognition algorithms according to their execution time information, and determining the target image recognition algorithm from the sorting result.
  • A13. the video processing device according to A12, wherein the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information includes: sorting them according to both the execution time information and the interval time information.
  • A14. the video processing device according to A13, wherein sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information includes: obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time; and sorting the candidate algorithms by the quotient of the idle time interval value and the longest operation interval value.
  • A15. the video processing device according to A12, wherein the priority calculation information further includes weight information identifying the importance of the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information includes: sorting them according to both the execution time information and the weight information.
  • A16. the video processing device according to A15, wherein the weight information includes a weight coefficient; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information includes: obtaining the idle time interval value between the current time and the candidate algorithm's most recent execution time; and sorting the candidate algorithms by the product of the idle time interval value and the weight coefficient.
  • A17. the video processing device according to A16, wherein the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient includes: sorting them by the quotient of that product and the longest operation interval value.
  • A18. the video processing device according to A11, wherein the number of target image recognition algorithms is one.
  • A19. the video processing device according to A11, wherein after the target image recognition algorithm is used to perform image processing on the image frame to be processed in the video, the operations further include: updating the priority calculation information corresponding to the candidate image recognition algorithms, and updating the range of the image frames to be processed.
  • A20. the video processing device according to A19, wherein the number of image frames to be processed is one, and updating the range of the image frames to be processed includes: determining the image frame after the image frame to be processed as the new image frame to be processed.
  • A21. a handheld camera, characterized by comprising the video processing device according to any one of A11-A20, and further comprising a carrier fixedly connected to the video collector and used to carry at least part of the video collector.
  • A22. the handheld camera according to A21, wherein the carrier includes, but is not limited to, a handheld pan/tilt.
  • A23. the handheld camera according to A22, wherein the handheld pan/tilt is a handheld three-axis pan/tilt.
  • the handheld camera according to A21, wherein the video capture device includes, but is not limited to, a camera for a handheld three-axis pan/tilt.
  • the improvement of a technology can be clearly classified as a hardware improvement (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or a software improvement (an improvement to a method flow).
  • as technology develops, however, many of today's method-flow improvements can be regarded as direct improvements to hardware circuit structures.
  • designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit; therefore it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module.
  • a programmable logic device, for example a Field Programmable Gate Array (FPGA), is such an integrated circuit whose logic functions are determined by the user programming the device.
  • PLD: Programmable Logic Device
  • FPGA: Field Programmable Gate Array
  • HDL: Hardware Description Language; examples include ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), HDCal, JHDL, Lava, Lola, MyHDL, PALASM, RHDL, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language), and Verilog.
  • the controller can be implemented in any suitable manner.
  • for example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by that (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller can also be implemented as part of the memory's control logic.
  • in addition to implementing the controller purely in computer-readable program code, it is entirely possible to logically program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component; or the devices for realizing various functions can even be regarded as both software modules implementing the method and structures within the hardware component.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cell phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or Any combination of these devices.
  • These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
  • the device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment then provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • this application can be provided as a method, a system, or a computer program product. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • This application may be described in the general context of computer-executable instructions executed by a computer, such as program modules.
  • generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types.
  • This application can also be practiced in distributed computing environments, where tasks are executed by remote processing devices connected through a communication network.
  • in a distributed computing environment, program modules can be located in both local and remote computer storage media, including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application provide a video processing method, a device, and a handheld camera. First, a target image recognition algorithm is determined from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate algorithms; then the target image recognition algorithm is used to perform image processing on the image frame to be processed in the video. The embodiments of the present invention can therefore apply multiple image recognition algorithms to the frames of a video, satisfying diverse video processing and description needs, while processing each pending frame with only the target algorithm, which reduces image processing time and makes real-time processing during video capture feasible.

Description

Video processing method, device, and handheld camera — Technical field
The embodiments of this application relate to the field of image processing technology, and in particular to a video processing method, a device, and a handheld camera.
Background
As video processing technology develops, processing a video clip requires describing the objects, scenes, and other elements it contains from multiple different angles. Since no single algorithm can currently meet all video recognition and description needs, practical applications use several different algorithms as required. However, running every selected algorithm on every image frame is too time-consuming to meet real-time requirements.
Summary of the invention
In view of this, one of the technical problems solved by the embodiments of the present invention is to provide a video processing method, a device, and a handheld camera, to overcome the prior-art defect that processing video clips with multiple algorithms is too time-consuming.
An embodiment of the present application provides a video processing method, including:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
using the target image recognition algorithm to perform image processing on an image frame to be processed in a video.
Optionally, the priority calculation information includes execution time information identifying the most recent execution time of the candidate image recognition algorithm; correspondingly, determining a target image recognition algorithm from the plurality of candidate image recognition algorithms according to the priority calculation information includes:
sorting the plurality of candidate image recognition algorithms according to their corresponding execution time information, and determining the target image recognition algorithm from the sorting result.
Optionally, the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information includes:
sorting the plurality of candidate image recognition algorithms according to both the execution time information and the interval time information.
Optionally, sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information includes:
obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time;
sorting the plurality of candidate image recognition algorithms by the quotient of the idle time interval value and the longest operation interval value.
Optionally, the priority calculation information further includes weight information identifying the importance of the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information includes:
sorting the plurality of candidate image recognition algorithms according to both the execution time information and the weight information.
Optionally, the weight information includes a weight coefficient; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information includes:
obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time;
sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient.
Optionally, the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient includes:
sorting the plurality of candidate image recognition algorithms by the quotient of that product and the longest operation interval value.
Optionally, the number of target image recognition algorithms is 1.
Optionally, after the target image recognition algorithm is used to perform image processing on the image frame to be processed in the video, the method further includes: updating the priority calculation information corresponding to the candidate image recognition algorithms, and updating the range of the image frames to be processed.
Optionally, the number of image frames to be processed is 1, and updating the range of the image frames to be processed includes:
determining an image frame after the image frame to be processed as the new image frame to be processed.
An embodiment of the present application further provides a video processing device, including a memory, a processor, and a video collector, the video collector being used to capture the target to be tracked in a target area; the memory being used to store program code; and the processor calling the program code, which when executed performs the following operations:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
using the target image recognition algorithm to perform image processing on an image frame to be processed in a video.
An embodiment of the present application further provides a handheld camera, including the aforementioned video processing device, and further including a carrier fixedly connected to the video collector and used to carry at least part of the video collector.
Optionally, the carrier includes, but is not limited to, a handheld gimbal.
Optionally, the handheld gimbal is a handheld three-axis gimbal.
Optionally, the video collector includes, but is not limited to, a camera for a handheld three-axis gimbal.
In the embodiments of the present application, a target image recognition algorithm is first determined from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate algorithms; then the target algorithm is used to perform image processing on the image frame to be processed in the video. The embodiments of the present invention can therefore apply multiple image recognition algorithms to the frames of a video, satisfying diverse video processing and description needs, while processing each pending frame with only the target algorithm, which reduces image processing time and makes real-time processing during video capture feasible.
Brief description of the drawings
Some specific embodiments of the present application are described in detail below, by way of example and not limitation, with reference to the accompanying drawings. Identical reference numerals in the drawings denote identical or similar parts. Those skilled in the art should understand that the drawings are not necessarily drawn to scale. In the drawings:
Fig. 1 is a schematic flowchart of a video processing method provided in Embodiment 1 of the present application;
Fig. 2 is a schematic flowchart of a video processing method provided in Embodiment 2 of the present application;
Fig. 3 is a schematic flowchart of a video processing method provided in Embodiment 3 of the present application;
Fig. 4 is a schematic structural diagram of a video processing device provided in Embodiment 4 of the present application;
Fig. 5 is a schematic structural diagram of a handheld gimbal provided in Embodiment 5 of the present application;
Fig. 6 is a schematic structural diagram of a handheld gimbal connected to a mobile phone, provided in Embodiment 5 of the present application;
Fig. 7 is a schematic structural diagram of a handheld gimbal provided in Embodiment 5 of the present application.
Detailed description of the embodiments
The terms used in the present invention are for the purpose of describing particular embodiments only and are not intended to limit the invention. The singular forms "a", "said", and "the" used in the present invention and the appended claims are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that "first", "second", and similar words used in the specification and claims of this application do not denote any order, quantity, or importance, but are only used to distinguish different components. Likewise, words such as "a" or "an" do not denote a quantity limit but the presence of at least one.
Specific implementations of the embodiments of the present invention are further described below with reference to the accompanying drawings.
Embodiment 1
Embodiment 1 of the present application provides a video processing method. As shown in Fig. 1, a schematic flowchart of a video processing method provided in an embodiment of the present application, it includes the following steps:
Step S101: determine a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate algorithms.
In this embodiment, the candidate image recognition algorithms are used to recognize image frames in the video and obtain corresponding description information. Different candidate algorithms may differ in the objects they recognize, the descriptions they generate, the time they consume, and the processor resources they require. This embodiment does not limit the specific types or number of candidate algorithms; in practice they can be selected according to the video description requirements.
For example, to recognize the species in a target image and generate descriptions of the recognized species, image recognition algorithms that distinguish people, cats, dogs, and other species can be selected; to recognize the scene and generate descriptions of it, algorithms that distinguish sunny, rainy, night-time, and other scenes can be selected; to recognize facial expressions and generate descriptions of them, algorithms that distinguish smiling, laughing, crying, and other expressions can be selected.
In this embodiment, the priority calculation information identifies the relative usage priority of the multiple candidate image recognition algorithms. The computation, identification, and recording format of the priority calculation information are not limited.
For example, the priority calculation information can score or rate all candidate algorithms according to a preset priority calculation model, and identify the usage priority of the candidate algorithms with numbers or text.
Optionally, since the attribute information of each candidate algorithm changes as time passes, to guarantee real-time data processing the priority calculation information can be derived from each candidate algorithm's attribute information at the current time.
In this embodiment, the target image recognition algorithm is one or more of the candidate algorithms; in practice it can be set according to at least one of the video description requirements, hardware capability, or latency requirements.
Optionally, to reduce image processing time, the number of target algorithms can be set to 1: only one target algorithm is determined each time, so that subsequently only that one target algorithm performs image processing on the pending frame.
Step S102: use the target image recognition algorithm to perform image processing on the image frame to be processed in the video.
In this embodiment, the video includes multiple consecutive image frames, and a pending frame is a frame of the video that has not yet been processed by any candidate algorithm. A pending frame may be a single frame or several consecutive frames; this embodiment does not specifically limit this.
In this embodiment, the target algorithm processes at least one pending frame adjacent to the already-processed frames, and when processing pending frames it processes consecutive pending frames in their order in the video.
Optionally, since the candidate algorithms are usually selected according to the video processing requirements, to prevent an overly long interval between two executions of any candidate algorithm from degrading the final processing result, in step S102 the target algorithm may process only a single pending frame of the video.
As the above embodiment shows, the embodiment of the present invention first determines a target image recognition algorithm from a plurality of candidate algorithms according to their priority calculation information, and then uses the target algorithm to process the pending frame in the video. The embodiment can therefore apply multiple image recognition algorithms to the frames of a video, satisfying diverse processing and description needs, while processing each pending frame with only the target algorithm, which reduces processing time and makes real-time processing during video capture feasible.
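The two-step flow of Embodiment 1 (pick the highest-priority candidate, then process only the current pending frame) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `Candidate` structure, the scoring rule (the description's idle-time/weight/interval formula), and all values are assumptions for demonstration.

```python
# Hypothetical sketch: select one target algorithm from the candidates by
# priority, where priority = (cur_time - last_time) * weight / interv.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    last_time: float   # most recent execution time (execution time info)
    interv: float      # longest allowed operation interval (interval info)
    weight: float      # importance weight coefficient

def score(c: Candidate, cur_time: float) -> float:
    # Idle interval, scaled by importance, normalized by the longest
    # allowed operation interval for this algorithm.
    return (cur_time - c.last_time) * c.weight / c.interv

def pick_target(candidates, cur_time):
    # Step S101: the single highest-priority candidate becomes the target.
    return max(candidates, key=lambda c: score(c, cur_time))

cands = [
    Candidate("species", last_time=0.0, interv=1.0, weight=1.0),
    Candidate("scene",   last_time=0.4, interv=2.0, weight=0.5),
    Candidate("face",    last_time=0.2, interv=0.5, weight=1.0),
]
target = pick_target(cands, cur_time=1.0)
print(target.name)  # "face": idle 0.8 against a 0.5 budget gives score 1.6
```

Only `target` would then run on the single pending frame (step S102), keeping per-frame cost bounded regardless of how many candidate algorithms are registered.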
Embodiment 2
Embodiment 2 of the present application provides a video processing method. As shown in Fig. 2, a schematic flowchart of a video processing method provided in an embodiment of the present application, it includes the following steps:
Step S201: sort the plurality of candidate image recognition algorithms according to their corresponding execution time information, and determine the target image recognition algorithm from the sorting result.
In this embodiment, since the candidate algorithms are selected according to the video processing requirements, too long an interval between two executions of any candidate algorithm may introduce large errors into the image processing result. To obtain a better processing result, the most recent execution time of each candidate algorithm must therefore be considered when determining the target algorithm. Specifically, the priority calculation information may include execution time information identifying each candidate algorithm's most recent execution time, so that the candidate algorithms can be sorted by their most recent execution times and the target algorithm determined from the sorting result.
For example, all candidate algorithms can be sorted in descending order of the interval between their most recent execution time and the current time, and at least one algorithm at the head of the list preferentially selected as the target algorithm.
The way the most recent execution time is identified in the execution time information is not limited. For example, it can be identified by the timestamp of the algorithm's most recent execution; by the time difference between that execution and the current time; or by one of the following: the sequence number in the video of the frame on which the algorithm last ran, the number of frames between that frame and the pending frame, or the timestamp of that frame.
Optionally, different candidate algorithms tolerate different numbers of frames between the two frames they process: with the same interval between two operations, different candidate algorithms produce processing results with different errors. Therefore, to further improve the processing effect while keeping the result reliable, the longest operation interval each candidate algorithm allows must also be considered when determining the target algorithm. Specifically, the priority calculation information may further include interval time information identifying the longest operation interval value allowed by each candidate algorithm.
The way the longest allowed operation interval is identified in the interval time information is likewise not limited; for example, it can be a time value such as 0.1 second or 1 second, or the number of frames between the two frames the algorithm processes.
Correspondingly, when both the candidate algorithm's most recent execution time and its longest allowed operation interval are considered, step S201 further includes: sorting the plurality of candidate algorithms according to both the execution time information and the interval time information.
Optionally, when consecutive frames of the video are processed in order, for example when the video is processed in real time during shooting, to improve processing efficiency step S201 may further include:
Sub-step S201a: from the execution time information of each candidate algorithm, obtain the idle time interval value between the current time and that algorithm's most recent execution time.
The way the current time and the idle time interval value are identified is not limited: they can be expressed as time values or as frame counts, but the current time and the most recent execution time must be identified the same way.
For example, when the execution time information is identified by the timestamp of the algorithm's most recent execution, the idle time interval value can be a time value such as 0.1, 0.5, or 1 second; when it is identified by the sequence number in the video of the frame on which the algorithm last ran, the idle time interval value can be a frame count such as 1, 2, or 3.
Sub-step S201b: sort the plurality of candidate algorithms by the quotient of the idle time interval value and the longest operation interval value.
The larger this quotient, the closer the candidate algorithm's idle time interval is to its longest allowed operation interval, and the higher the algorithm's execution priority must be. Computing the quotient therefore ranks highest the candidate algorithms whose idle intervals are closest to their allowed maxima, so they are preferentially determined as the target algorithm.
Optionally, video processing may place different usage-frequency or error requirements on different image recognition algorithms. To preferentially determine the more important algorithms as the target algorithm, the priority calculation information may further include weight information identifying the importance of each candidate algorithm.
Correspondingly, when both the candidate algorithm's most recent execution time and its importance are considered, step S201 further includes: sorting the plurality of candidate algorithms according to both the execution time information and the weight information.
Optionally, the importance of each candidate algorithm can be set by a preset algorithm or manually, i.e. the weight information can include a weight coefficient. Correspondingly, step S201 may further include:
Sub-step S201c: from the execution time information of each candidate algorithm, obtain the idle time interval value between the current time and that algorithm's most recent execution time.
Sub-step S201c is implemented in the same way as sub-step S201a above, with the corresponding benefits; the details are not repeated here.
Sub-step S201d: sort the plurality of candidate algorithms by the product of the idle time interval value and the weight coefficient.
Computing the product of the idle time interval value and the weight coefficient ranks highest the candidate algorithms that are both more important and have been idle longer, so they are preferentially determined as the target algorithm.
Optionally, to jointly consider the candidate algorithm's most recent execution time, its importance, and its longest allowed operation interval, the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate algorithm. Correspondingly, sub-step S201d includes: sorting the plurality of candidate algorithms by the quotient of the product and the longest operation interval value.
Here, if "cur_time" denotes the current time, "last_time" the most recent execution time of a candidate algorithm, "interv" the longest operation interval value that algorithm allows, and "weight" the weight coefficient, then the "quotient of the product and the longest operation interval value" in sub-step S201d corresponds to the formula "(cur_time-last_time)*weight/interv". A value can be computed for each candidate algorithm with this formula, and the plurality of candidate algorithms sorted accordingly.
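The formula "(cur_time-last_time)*weight/interv" quoted above can be exercised with a minimal sketch. The algorithm names and the numeric values below are illustrative assumptions, not data from the patent.

```python
# Hypothetical ranking per sub-step S201d: score each candidate algorithm and
# sort in descending order of priority.
def priority(cur_time, last_time, weight, interv):
    # Idle interval since the last run, scaled by the importance weight and
    # normalized by the longest allowed operation interval.
    return (cur_time - last_time) * weight / interv

algos = {
    "detect_face":  {"last_time": 9.0, "weight": 1.0, "interv": 0.5},
    "detect_scene": {"last_time": 8.0, "weight": 0.5, "interv": 2.0},
}
cur = 10.0
ranked = sorted(algos, key=lambda n: priority(cur, **algos[n]), reverse=True)
print(ranked[0])  # "detect_face": (10-9)*1.0/0.5 = 2.0 vs (10-8)*0.5/2.0 = 0.5
```

The normalization by `interv` is what lets algorithms with very different tolerances compete on one scale: an algorithm close to exhausting a tight budget outranks one that is far from exhausting a loose budget, even if the latter has been idle longer in absolute terms.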
Step S202: use the target image recognition algorithm to perform image processing on the image frame to be processed in the video.
In this embodiment, step S202 is implemented in the same way as step S102 in Embodiment 1, with the corresponding benefits; the details are not repeated here.
As the above embodiment shows, when determining the target algorithm this embodiment gives priority to each candidate algorithm's most recent execution time, and/or its importance and/or its longest allowed operation interval, which satisfies real-time video processing while reducing the error of the processing results and guaranteeing reliable processing of the video's frames.
Embodiment 3
Embodiment 3 of the present application provides a video processing method. As shown in Fig. 3, a schematic flowchart of a video processing method provided in an embodiment of the present application, it includes the following steps:
Step S301: determine a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate algorithms.
In this embodiment, step S301 is implemented in the same way as step S101 in Embodiment 1 or step S201 in Embodiment 2, with the corresponding benefits; the details are not repeated here.
Step S302: use the target image recognition algorithm to perform image processing on the image frame to be processed in the video.
In this embodiment, step S302 is implemented in the same way as step S102 in Embodiment 1 or step S202 in Embodiment 2, with the corresponding benefits; the details are not repeated here.
Step S303: update the priority calculation information corresponding to the candidate image recognition algorithms, and update the range of the image frames to be processed.
In this embodiment, after the target algorithm has processed a pending frame, the information associated with each candidate algorithm at the current time changes as time passes: the priority calculation information may change, and the pending frame becomes a processed frame. To process consecutive frames, the priority calculation information of the algorithms and the range of pending frames must therefore be updated.
Optionally, to process consecutive frames, once the target algorithm has processed one or more pending frames their status changes from unprocessed to processed, so at least one adjacent unprocessed frame can be determined as the new pending frame.
Optionally, if the number of pending frames is 1, after the target algorithm has processed that pending frame in step S302, the frame immediately after it can be determined in step S303 as the new pending frame. This not only processes consecutive frames but also shortens the interval between two executions of each candidate algorithm.
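A hedged sketch of the per-frame loop of this embodiment (steps S301-S303), assuming a single pending frame and illustrative data structures; `pick_target` and `run` are placeholders for the selection and image-processing steps, not the patent's concrete functions.

```python
# Hypothetical scheduling loop: after the target algorithm runs on one frame,
# refresh its execution-time info and slide the pending frame forward by one.
def process_video(frames, candidates, pick_target, run):
    pending = 0  # index of the single pending frame
    t = 0.0      # current time
    while pending < len(frames):
        target = pick_target(candidates, t)   # step S301: choose by priority
        run(target, frames[pending])          # step S302: process the frame
        target["last_time"] = t               # step S303: update priority info
        pending += 1                          # step S303: next frame is pending
        t += 1.0 / 30.0                       # advance by one 30 fps frame time
```

With this update rule the just-executed algorithm's idle interval resets to zero, so a different candidate tends to win the next frame, spreading the algorithms across consecutive frames instead of running them all on each frame.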
As the above embodiment shows, by updating in real time the priority calculation information of the image recognition algorithms and the range of pending frames, this embodiment processes consecutive frames and guarantees real-time video processing.
Embodiment 4
As shown in Fig. 4, Embodiment 4 of the present application provides a video processing device 40, including a memory 401, a processor 402, and a video collector 403. The video collector 403 is used to capture the target to be tracked in a target area; the memory 401 is used to store program code; and the processor 402 calls the program code, which when executed performs the following operations:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
using the target image recognition algorithm to perform image processing on the image frame to be processed in the video.
In one embodiment, the priority calculation information includes execution time information identifying the most recent execution time of the candidate image recognition algorithm; correspondingly, determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to the priority calculation information includes:
sorting the plurality of candidate image recognition algorithms according to their corresponding execution time information, and determining the target image recognition algorithm from the sorting result.
In one embodiment, the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information includes:
sorting the plurality of candidate image recognition algorithms according to both the execution time information and the interval time information.
In one embodiment, sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information includes:
obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time;
sorting the plurality of candidate image recognition algorithms by the quotient of the idle time interval value and the longest operation interval value.
In one embodiment, the priority calculation information further includes weight information identifying the importance of the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information includes:
sorting the plurality of candidate image recognition algorithms according to both the execution time information and the weight information.
In one embodiment, the weight information includes a weight coefficient; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information includes:
obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time;
sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient.
In one embodiment, the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient includes:
sorting the plurality of candidate image recognition algorithms by the quotient of that product and the longest operation interval value.
In one embodiment, the number of target image recognition algorithms is 1.
In one embodiment, after the target image recognition algorithm is used to perform image processing on the image frame to be processed in the video, the operations further include:
updating the priority calculation information corresponding to the candidate image recognition algorithms, and the range of the image frames to be processed.
In one embodiment, the number of image frames to be processed is 1, and updating the range of the image frames to be processed includes:
determining an image frame after the image frame to be processed as the new image frame to be processed.
Embodiment 5
In one embodiment, a handheld camera includes the video processing device described in Embodiment 4 above, and further includes a carrier fixedly connected to the video collector and used to carry at least part of the video collector.
In one embodiment, the carrier includes, but is not limited to, a handheld gimbal.
In one embodiment, the handheld gimbal is a handheld three-axis gimbal.
In one embodiment, the video collector includes, but is not limited to, a camera for a handheld three-axis gimbal.
The basic construction of the handheld gimbal camera is briefly introduced below.
As shown in Fig. 5, the handheld gimbal 1 of the embodiment of the present invention includes a handle 11 and a photographing device 12 mounted on the handle 11. In this embodiment the photographing device 12 may include a three-axis gimbal camera; in other embodiments it includes a gimbal camera with two axes or more than three axes.
The handle 11 is provided with a display screen 13 for displaying the content shot by the photographing device 12. The invention does not limit the type of the display screen 13.
By providing a display screen 13 on the handle 11 of the handheld gimbal 1, the screen can display what the photographing device 12 shoots, so the user can quickly browse the pictures or videos it has taken through the display screen 13. This improves the interactivity and fun of the handheld gimbal 1 and meets users' diverse needs.
In one embodiment, the handle 11 is further provided with an operating function section for controlling the photographing device 12. By operating it, the user can control the working of the photographing device 12, for example switching it on and off, controlling its shooting, or controlling the attitude changes of its gimbal part, allowing quick operation of the photographing device 12. The operating function section can take the form of buttons, knobs, or a touchscreen.
In one embodiment, the operating function section includes a shooting button 14 for controlling the shooting of the photographing device 12, a power/function button 15 for switching the photographing device 12 on and off and for other functions, and a universal key 16 for controlling the movement of the gimbal. Of course, the operating function section can also include other control buttons, such as an image storage button or image playback control buttons, set according to actual needs.
In one embodiment, the operating function section and the display screen 13 are arranged on the same face of the handle 11; in Fig. 5 both are on the front of the handle 11, which is ergonomic and makes the overall appearance and layout of the handheld gimbal 1 more reasonable and attractive.
Further, a function key A is provided on the side of the handle 11 for quick, intelligent one-tap video creation. When the camera is on, tapping the orange key on the right side of the body enables the function: a video segment is then shot automatically at intervals, N segments in total (N ≥ 2). After connecting a mobile device such as a phone and selecting the "one-tap film" function, the system intelligently filters the footage, matches suitable templates, and quickly generates a polished work.
In an optional embodiment, the handle 11 is further provided with a card slot 17 for inserting a storage element. In this embodiment, the card slot 17 is on the side of the handle 11 adjacent to the display screen 13; inserting a memory card into the slot stores the images shot by the photographing device 12 on the card. Placing the slot on the side does not interfere with the use of other functions and gives a better user experience.
In one embodiment, a power supply battery for powering the handle 11 and the photographing device 12 can be arranged inside the handle 11. A lithium battery can be used for its large capacity and small size, supporting the miniaturized design of the handheld gimbal 1.
In one embodiment, the handle 11 is further provided with a charging/USB interface 18. In this embodiment, it is at the bottom of the handle 11, making it convenient to connect an external power source or storage device to charge the battery or transfer data.
In one embodiment, the handle 11 is further provided with a sound pickup hole 19 for receiving audio signals, connected internally to a microphone. There may be one or more sound pickup holes 19. An indicator light 20 for displaying status is also included. The user can interact by audio, via the sound pickup hole 19, with what is shown on the display screen 13. In addition, the indicator light 20 serves as a reminder: through it the user can learn the battery level of the handheld gimbal 1 and which function is currently executing. Both the sound pickup hole 19 and the indicator light 20 can also be arranged on the front of the handle 11, which better matches the user's habits and is more convenient to operate.
In one embodiment, the photographing device 12 includes a gimbal support and a camera mounted on it. The camera can be a photographic camera, or an image pickup element composed of a lens and an image sensor (such as a CMOS or CCD sensor), selected as needed. The camera can be integrated on the gimbal support, making the photographing device 12 a gimbal camera; it can also be an external photographing device, detachably connected or clamped onto the gimbal support.
In one embodiment, the gimbal support is a three-axis gimbal support and the photographing device 12 is a three-axis gimbal camera. The three-axis gimbal support includes a yaw axis assembly 22, a roll axis assembly 23 movably connected to the yaw axis assembly 22, and a pitch axis assembly 24 movably connected to the roll axis assembly 23; the camera is mounted on the pitch axis assembly 24. The yaw axis assembly 22 drives the photographing device 12 to rotate in the yaw direction. Of course, in other examples the gimbal support can also be a two-axis gimbal, a four-axis gimbal, and so on, selected as needed.
In one embodiment, a mounting portion is further provided at the end of the connecting arm that joins the roll axis assembly, and the yaw axis assembly can be set in the handle; the yaw axis assembly drives the photographing device 12 to rotate with it in the yaw direction.
In an optional embodiment, as shown in Fig. 6, the handle 11 is provided with an adapter 26 for coupling with a mobile device 2 (such as a mobile phone), the adapter 26 being detachably connected to the handle 11. The adapter 26 protrudes from the side of the handle for connecting to the mobile device 2; after the adapter 26 is connected to the mobile device 2, the handheld gimbal 1 docks with the adapter 26 and is supported at the end of the mobile device 2.
Providing the handle 11 with an adapter 26 for connecting to the mobile device 2 joins the handle 11 and the mobile device 2 to each other, so the handle 11 can serve as a base for the mobile device 2, and the user can hold the other end of the mobile device 2 to pick up and operate the handheld gimbal 1 together with it; the connection is convenient and fast, and the product looks good. Moreover, once the handle 11 is coupled to the mobile device 2 through the adapter 26, a communication connection between the handheld gimbal 1 and the mobile device 2 is established, and data can be transmitted between the photographing device 12 and the mobile device 2.
In one embodiment, the adapter 26 is detachably connected to the handle 11, that is, the adapter 26 can be mechanically attached to or removed from the handle 11. Further, the adapter 26 is provided with an electrical contact portion, and the handle 11 is provided with an electrical contact mating portion that matches the electrical contact portion.
Thus, when the handheld gimbal 1 does not need to be connected to the mobile device 2, the adapter 26 can be removed from the handle 11. When a connection is needed, the adapter 26 is mounted on the handle 11 to complete the mechanical connection between the two, while the connection between the electrical contact portion and the electrical contact mating portion ensures the electrical connection, so that data can be transmitted between the photographing device 12 and the mobile device 2 through the adapter 26.
In one embodiment, as shown in Fig. 5, a receiving groove 27 is provided on the side of the handle 11, and the adapter 26 is slidably clamped in the receiving groove 27. After the adapter 26 is installed in the receiving groove 27, it partially protrudes from the groove, and the protruding portion is used to connect with the mobile device 2.
In one embodiment, referring to Fig. 5, when the adapter 26 is inserted into the receiving groove 27 adapter-end first, the adapter portion sits flush with the receiving groove 27, so that the adapter 26 is stored inside the receiving groove 27 of the handle 11.
Therefore, when the handheld gimbal 1 needs to be connected to the mobile device 2, the adapter 26 can be inserted into the receiving groove 27 connector-end first, so that the adapter 26 protrudes from the groove and the mobile device 2 and the handle 11 can be connected to each other.
When the mobile device 2 is no longer in use, or needs to be unplugged, the adapter 26 can be taken out of the receiving groove 27 of the handle 11 and then inserted the other way around, so that the adapter 26 is stored inside the handle 11. When stored, the adapter 26 sits flush with the receiving groove 27, which keeps the surface of the handle 11 flat; storing the adapter 26 in the handle 11 also makes the device easier to carry.
In one embodiment, the receiving groove 27 is semi-open on one side surface of the handle 11, which makes it easier for the adapter 26 to slide into and clamp in the groove. Of course, in other examples the adapter 26 can also be detachably connected to the receiving groove 27 of the handle 11 by a snap connection, a plug connection, or the like.
In one embodiment, the receiving groove 27 is on the side of the handle 11; when the adapter function is not in use, a cover 28 is clamped over the groove, which is convenient for the user and does not affect the overall appearance of the front and sides of the handle.
In one embodiment, the electrical contact portion and the electrical contact mating portion can be electrically connected by way of contact points. For example, the electrical contact portion can be a telescopic probe, an electrical plug-in interface, or an electrical contact. Of course, in other examples the electrical contact portion and the mating portion can also be electrically connected directly in a surface-to-surface contact manner.
A1. A video processing method, characterized by comprising:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
using the target image recognition algorithm to perform image processing on an image frame to be processed in a video.
A2. The video processing method according to A1, characterized in that the priority calculation information includes execution time information identifying the most recent execution time of the candidate image recognition algorithm; correspondingly, determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to the priority calculation information comprises:
sorting the plurality of candidate image recognition algorithms according to their corresponding execution time information, and determining the target image recognition algorithm from the sorting result.
A3. The video processing method according to A2, characterized in that the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information comprises:
sorting the plurality of candidate image recognition algorithms according to both the execution time information and the interval time information.
A4. The video processing method according to A3, characterized in that sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information comprises:
obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time;
sorting the plurality of candidate image recognition algorithms by the quotient of the idle time interval value and the longest operation interval value.
A5. The video processing method according to A2, characterized in that the priority calculation information further includes weight information identifying the importance of the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information comprises:
sorting the plurality of candidate image recognition algorithms according to both the execution time information and the weight information.
A6. The video processing method according to A5, characterized in that the weight information includes a weight coefficient; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information comprises:
obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time;
sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient.
A7. The video processing method according to A6, characterized in that the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient comprises:
sorting the plurality of candidate image recognition algorithms by the quotient of that product and the longest operation interval value.
A8. The video processing method according to A1, characterized in that the number of target image recognition algorithms is 1.
A9. The video processing method according to A1, characterized in that after the target image recognition algorithm is used to perform image processing on the image frame to be processed in the video, the method further comprises:
updating the priority calculation information corresponding to the candidate image recognition algorithms, and updating the range of the image frames to be processed.
A10. The video processing method according to A9, characterized in that the number of image frames to be processed is 1, and updating the range of the image frames to be processed comprises:
determining an image frame after the image frame to be processed as the new image frame to be processed.
A11. A video processing device, characterized by comprising a memory, a processor, and a video collector, the video collector being used to capture the target to be tracked in a target area; the memory being used to store program code; and the processor calling the program code, which when executed performs the following operations:
determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority calculation information corresponding to the candidate image recognition algorithms;
using the target image recognition algorithm to perform image processing on an image frame to be processed in a video.
A12. The video processing device according to A11, characterized in that the priority calculation information includes execution time information identifying the most recent execution time of the candidate image recognition algorithm; correspondingly, determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to the priority calculation information comprises:
sorting the plurality of candidate image recognition algorithms according to their corresponding execution time information, and determining the target image recognition algorithm from the sorting result.
A13. The video processing device according to A12, characterized in that the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information comprises:
sorting the plurality of candidate image recognition algorithms according to both the execution time information and the interval time information.
A14. The video processing device according to A13, characterized in that sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information comprises:
obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time;
sorting the plurality of candidate image recognition algorithms by the quotient of the idle time interval value and the longest operation interval value.
A15. The video processing device according to A12, characterized in that the priority calculation information further includes weight information identifying the importance of the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information comprises:
sorting the plurality of candidate image recognition algorithms according to both the execution time information and the weight information.
A16. The video processing device according to A15, characterized in that the weight information includes a weight coefficient; correspondingly, sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information comprises:
obtaining, from the execution time information, the idle time interval value between the current time and the candidate algorithm's most recent execution time;
sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient.
A17. The video processing device according to A16, characterized in that the priority calculation information further includes interval time information identifying the longest operation interval value allowed by the candidate image recognition algorithm; correspondingly, sorting the plurality of candidate image recognition algorithms by the product of the idle time interval value and the weight coefficient comprises:
sorting the plurality of candidate image recognition algorithms by the quotient of that product and the longest operation interval value.
A18. The video processing device according to A11, characterized in that the number of target image recognition algorithms is 1.
A19. The video processing device according to A11, characterized in that after the target image recognition algorithm is used to perform image processing on the image frame to be processed in the video, the operations further comprise:
updating the priority calculation information corresponding to the candidate image recognition algorithms, and the range of the image frames to be processed.
A20. The video processing device according to A19, characterized in that the number of image frames to be processed is 1, and updating the range of the image frames to be processed comprises:
determining an image frame after the image frame to be processed as the new image frame to be processed.
A21. A handheld camera, characterized by comprising the video processing device according to any one of A11-A20, and further comprising a carrier fixedly connected to the video collector and used to carry at least part of the video collector.
A22. The handheld camera according to A21, characterized in that the carrier includes, but is not limited to, a handheld gimbal.
A23. The handheld camera according to A22, characterized in that the handheld gimbal is a handheld three-axis gimbal.
The handheld camera according to A21, characterized in that the video collector includes, but is not limited to, a camera for a handheld three-axis gimbal.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the appended claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user programming the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, while the original code to be compiled must be written in a particular programming language known as a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most widely used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. A person skilled in the art will also appreciate that a hardware circuit implementing a logical method flow can be readily obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
A controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (for example, software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. A person skilled in the art also knows that, in addition to implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, an application-specific integrated circuit, a programmable logic controller, an embedded microcontroller, and the like. Such a controller may therefore be regarded as a hardware component, and the apparatuses included therein for implementing various functions may also be regarded as structures within the hardware component. Or the apparatuses for implementing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, apparatuses, modules, or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
A person skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application may be described in the general context of computer-executable instructions executed by a computer, for example, program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively simple, and for relevant parts reference may be made to the description of the method embodiment.
The above descriptions are merely embodiments of the present application and are not intended to limit the present application. For a person skilled in the art, the present application may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (10)

  1. A video processing method, characterized by comprising:
    determining a target image recognition algorithm from a plurality of candidate image recognition algorithms according to priority computation information corresponding to the candidate image recognition algorithms;
    performing image processing on an image frame to be processed in a video by using the target image recognition algorithm.
  2. The video processing method according to claim 1, characterized in that the priority computation information includes execution time information identifying the most recent execution time of the candidate image recognition algorithm; correspondingly, the determining a target image recognition algorithm from the plurality of candidate image recognition algorithms according to the priority computation information corresponding to the candidate image recognition algorithms comprises:
    sorting the plurality of candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms, so as to determine the target image recognition algorithm according to the sorting result.
  3. The video processing method according to claim 2, characterized in that the priority computation information further includes interval time information identifying the longest operation time interval allowed for the candidate image recognition algorithm; correspondingly, the sorting the plurality of candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms comprises:
    sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms.
  4. The video processing method according to claim 3, characterized in that the sorting the plurality of candidate image recognition algorithms according to the execution time information and the interval time information corresponding to the candidate image recognition algorithms comprises:
    obtaining, according to the execution time information corresponding to the candidate image recognition algorithm, an idle time interval between the current time and the most recent execution time of the candidate image recognition algorithm;
    sorting the plurality of candidate image recognition algorithms according to the quotient of the idle time interval and the longest operation time interval.
  5. The video processing method according to claim 2, characterized in that the priority computation information further includes weight information identifying the importance of the candidate image recognition algorithm; correspondingly, the sorting the plurality of candidate image recognition algorithms according to the execution time information corresponding to the candidate image recognition algorithms comprises:
    sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms.
  6. The video processing method according to claim 5, characterized in that the weight information includes a weight coefficient; correspondingly, the sorting the plurality of candidate image recognition algorithms according to the execution time information and the weight information corresponding to the candidate image recognition algorithms comprises:
    obtaining, according to the execution time information corresponding to the candidate image recognition algorithm, an idle time interval between the current time and the most recent execution time of the candidate image recognition algorithm;
    sorting the plurality of candidate image recognition algorithms according to the product of the idle time interval and the weight coefficient.
  7. The video processing method according to claim 6, characterized in that the priority computation information further includes interval time information identifying the longest operation time interval allowed for the candidate image recognition algorithm; correspondingly, the sorting the plurality of candidate image recognition algorithms according to the product of the idle time interval and the weight coefficient comprises:
    sorting the plurality of candidate image recognition algorithms according to the quotient of the product and the longest operation time interval.
  8. The video processing method according to claim 1, characterized in that the number of target image recognition algorithms is 1.
  9. The video processing method according to claim 1, characterized in that, after the performing image processing on the image frame to be processed in the video by using the target image recognition algorithm, the method further comprises:
    updating the priority computation information corresponding to the candidate image recognition algorithms, and updating the range of the image frame to be processed.
  10. The video processing method according to claim 9, characterized in that the number of image frames to be processed is 1, and the updating the range of the image frame to be processed comprises:
    determining an image frame subsequent to the image frame to be processed as the new image frame to be processed.
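Claims 8 to 10 together describe a per-frame scheduling loop: exactly one target algorithm is selected for the pending frame, and after it runs, its priority computation information is refreshed and the pending-frame range advances by one frame. The following is a minimal sketch of that loop, not the patented implementation; the `last_exec`/`max_interval` fields and the frame-index clock are illustrative assumptions, and the priority used is the idle-time quotient of claim 4.

```python
def process_video(frames, candidates, run_algorithm, start_time=0):
    """Per frame: pick the single highest-priority candidate (claim 8),
    run it on the pending frame (claim 1), then update its priority
    information and advance the pending frame by one (claims 9-10).
    Priority is the quotient of the idle time interval and the longest
    allowed operation interval (claim 4)."""
    now = start_time
    for frame in frames:
        target = max(candidates,
                     key=lambda c: (now - c["last_exec"]) / c["max_interval"])
        run_algorithm(target, frame)
        target["last_exec"] = now  # refresh the execution time information
        now += 1                   # the next frame becomes the pending frame
```

Under this policy, algorithms with tight interval budgets are revisited often while slack ones fill the gaps, so no candidate sits idle far beyond its allowed interval.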
PCT/CN2020/099833 2020-04-15 2020-07-02 Video processing method and device, and handheld camera WO2021208256A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010296289.1A CN112052713B (zh) 2020-04-15 2020-04-15 Video processing method and device, and handheld camera
CN202010296289.1 2020-04-15

Publications (1)

Publication Number Publication Date
WO2021208256A1 (zh) 2021-10-21

Family

ID=73609668

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099833 WO2021208256A1 (zh) 2020-04-15 2020-07-02 一种视频处理方法、设备及手持相机

Country Status (2)

Country Link
CN (1) CN112052713B (zh)
WO (1) WO2021208256A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113347459B (zh) * 2021-06-11 2023-08-11 杭州星犀科技有限公司 Autonomous audio source switching method and apparatus based on the Android system, and computing device
CN114219883A (zh) * 2021-12-10 2022-03-22 北京字跳网络技术有限公司 Video special effect processing method and apparatus, electronic device, and program product
CN115909186B (zh) * 2022-09-30 2024-05-14 北京瑞莱智慧科技有限公司 Image information recognition method and apparatus, computer device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779412A (zh) * 2011-05-13 2012-11-14 深圳市新创中天信息科技发展有限公司 Integrated video traffic information detection method and system
CN107516097A (zh) * 2017-08-10 2017-12-26 青岛海信电器股份有限公司 Station logo recognition method and apparatus
CN107861684A (zh) * 2017-11-23 2018-03-30 广州视睿电子科技有限公司 Handwriting recognition method and apparatus, storage medium, and computer device
US20180322353A1 (en) * 2017-05-08 2018-11-08 PlantSnap, Inc. Systems and methods for electronically identifying plant species
CN108830198A (zh) * 2018-05-31 2018-11-16 上海玮舟微电子科技有限公司 Video format recognition method, apparatus, device, and storage medium
CN108875519A (zh) * 2017-12-19 2018-11-23 北京旷视科技有限公司 Object detection method, apparatus and system, and storage medium


Also Published As

Publication number Publication date
CN112052713B (zh) 2022-01-11
CN112052713A (zh) 2020-12-08

Similar Documents

Publication Publication Date Title
WO2021208256A1 (zh) Video processing method and device, and handheld camera
WO2021208253A1 (zh) Tracking object determination method and device, and handheld camera
US8605158B2 (en) Image pickup control apparatus, image pickup control method and computer readable medium for changing an image pickup mode
CN102457661B (zh) Camera
CN105704369B (zh) Information processing method and apparatus, and electronic device
CN104902185B (zh) Photographing method and apparatus
WO2021208249A1 (zh) Image processing method and device, and handheld camera
CN102891958A (zh) Digital camera with a posture guidance function
CN103874970A (zh) Electronic device and program
EP2464095A1 (en) Electronic device, control method, program, and image capturing system
WO2021208251A1 (zh) Face tracking method and face tracking device
US10979627B2 (en) Mobile terminal
WO2021208255A1 (zh) Video clip marking method and device, and handheld camera
CN108632543A (zh) Image display method and apparatus, storage medium, and electronic device
CN110177200A (zh) Camera module, electronic device, and image capturing method
KR20220102401A (ko) Electronic device and operation method thereof
WO2021208252A1 (zh) Tracking target determination method and apparatus, and handheld camera
WO2021208254A1 (zh) Method and device for recovering a tracking target, and handheld camera
WO2021208257A1 (zh) Tracking state determination method and device, and handheld camera
WO2021208258A1 (zh) Tracking-target-based search method and device, and handheld camera therefor
CN207491129U (zh) Intelligent interactive projector
CN104104870A (zh) Photographing control method, photographing control apparatus, and photographing device
WO2021208261A1 (zh) Method and device for recovering a tracking target, and handheld camera
WO2021208260A1 (zh) Method and device for displaying a tracking frame of a target object, and handheld camera
CN101848325A (zh) Photographing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20931299

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20931299

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.07.2023)
