CN114598834A - Video processing method and device, electronic equipment and readable storage medium - Google Patents

Info

Publication number
CN114598834A
CN114598834A (application CN202210500792.3A)
Authority
CN
China
Prior art keywords
parameters
frame rate
target
video
format
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210500792.3A
Other languages
Chinese (zh)
Inventor
潘三明
闫亚旗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Tower Co Ltd
Original Assignee
China Tower Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Tower Co Ltd
Priority to CN202210500792.3A
Publication of CN114598834A
Legal status: Pending (current)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/01 - Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a video processing method, a video processing device, electronic equipment and a readable storage medium, and relates to the technical field of Internet of Things equipment. The method includes: receiving N algorithm models, N frame rate parameters and N format parameters sent by a first server, wherein the N frame rate parameters respectively correspond to the N algorithm models and the N format parameters respectively correspond to the N frame rate parameters; acquiring a video; performing format conversion processing on the video by using a target format parameter to obtain converted video data; performing frame extraction processing on the converted video data according to a target frame rate parameter to obtain intermediate picture data; performing calculation processing on the intermediate picture data through a target algorithm model to obtain a calculation processing result; and sending the calculation processing result to the first server. By performing multi-format processing and algorithm processing on the video of a single camera, the invention can adapt to different business requirements, improving the utilization efficiency of the camera and reducing the occupation of network resources.

Description

Video processing method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of Internet of things equipment, in particular to a video processing method and device, electronic equipment and a readable storage medium.
Background
With the continuous development of communication technology, long-term monitoring of a specific area can be realized efficiently through intelligent identification, reducing the use of manpower; this approach is widely applied in fields such as construction sites and traffic safety. In the prior art, a single service usually needs several cameras, and because different services are deployed independently, multiple cameras end up covering the same area. As a result, excessive camera and other device resources are invested, and because the videos shot by all of these cameras are uploaded to the server, network transmission and storage resources are heavily occupied.
Therefore, the prior art has the problem that equipment and network resources are wasted.
Disclosure of Invention
The embodiment of the invention provides a video processing method, a video processing device, electronic equipment and a readable storage medium, and aims to solve the problem that equipment and network resources are wasted in the prior art.
In order to solve the problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a video processing method, which is executed by a gateway, and includes:
receiving N algorithm models, N frame rate parameters and N format parameters, wherein the N frame rate parameters are N parameters respectively corresponding to the N algorithm models, the N format parameters are N parameters respectively corresponding to the N frame rate parameters, and N is a natural number greater than 1;
acquiring a video, wherein the video is recorded by a camera;
performing format conversion processing on the video by using a target format parameter to obtain converted video data, wherein the target format parameter is at least one format parameter in the N format parameters;
performing frame extraction processing on the converted video data according to a target frame rate parameter to obtain intermediate picture data, wherein the target frame rate parameter is at least one frame rate parameter corresponding to the target format parameter in the N frame rate parameters;
calculating the intermediate image data through a target algorithm model to obtain a calculation processing result, wherein the target algorithm model is at least one algorithm model corresponding to the target frame rate parameter in the N algorithm models;
and sending the calculation processing result to the first server.
In a second aspect, an embodiment of the present invention further provides a video processing method, executed by a first server, including:
acquiring configuration information, wherein the configuration information comprises N algorithm models and M second servers, the N algorithm models are N algorithm models respectively corresponding to the M second servers, and M is a natural number greater than 1;
sending the N algorithm models, N frame rate parameters and N format parameters to a gateway, wherein the N frame rate parameters are N parameters respectively corresponding to the N algorithm models, the N format parameters are N parameters respectively corresponding to the N frame rate parameters, and N is a natural number greater than 1;
receiving a calculation processing result, wherein the calculation processing result is a processing result sent to the first server by the gateway in response to the N algorithm models;
and sending the calculation processing result to a target server, wherein the target server is at least one server in the M second servers.
In a third aspect, an embodiment of the present invention further provides a video processing apparatus, including:
a receiving module, configured to receive N algorithm models, N frame rate parameters and N format parameters sent by a first server, where the N frame rate parameters are N parameters respectively corresponding to the N algorithm models, the N format parameters are N parameters respectively corresponding to the N frame rate parameters, and N is a natural number greater than 1;
the acquisition module is used for acquiring a video, and the video is recorded by a camera;
the first processing module is used for performing format conversion processing on the video by using a target format parameter to obtain converted video data, wherein the target format parameter is at least one format parameter in the N format parameters;
a second processing module, configured to perform frame extraction processing on the converted video data according to a target frame rate parameter to obtain intermediate picture data, where the target frame rate parameter is at least one frame rate parameter corresponding to the target format parameter in the N frame rate parameters;
a third processing module, configured to perform calculation processing on the intermediate image data through a target algorithm model to obtain a calculation processing result, where the target algorithm model is at least one algorithm model corresponding to the target frame rate parameter in the N algorithm models;
and the sending module is used for sending the calculation processing result to the first server.
In a fourth aspect, an embodiment of the present invention further provides a video processing apparatus, including:
the acquisition module is used for acquiring configuration information, wherein the configuration information comprises N algorithm models and M second servers, the N algorithm models are N algorithm models respectively corresponding to the M second servers, and M is a natural number greater than 1;
a first sending module, configured to send the N algorithm models, N frame rate parameters, and N format parameters to a gateway, where the N frame rate parameters are N parameters respectively corresponding to the N algorithm models, the N format parameters are N parameters respectively corresponding to the N frame rate parameters, and N is a natural number greater than 1;
a receiving module, configured to receive a calculation processing result, where the calculation processing result is a processing result sent to the first server by the gateway in response to the N algorithm models;
and the second sending module is used for sending the calculation processing result to a target server, wherein the target server is at least one server in the M second servers.
In a fifth aspect, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps in the video processing method according to the first aspect; or, implementing the steps in the video processing method according to the second aspect.
In a sixth aspect, an embodiment of the present invention further provides a readable storage medium, which is used for storing a program, and when the program is executed by a processor, the program implements the steps in the video processing method according to the first aspect; or, implementing the steps in the video processing method according to the second aspect.
In the embodiment of the invention, different format processing and algorithm processing are performed on the video captured by one camera, so that different service requirements can be met while the occupation of equipment and network resources is reduced.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without inventive effort.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention;
fig. 2 is a second flowchart of a video processing method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 4 is a second schematic structural diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention, which is executed by a gateway, and as shown in fig. 1, the video processing method includes:
step 101, receiving N algorithm models, N frame rate parameters and N format parameters sent by a first server, where the N frame rate parameters are N parameters corresponding to the N algorithm models, the N format parameters are N parameters corresponding to the N frame rate parameters, and N is a natural number greater than 1.
The algorithm model is used for identifying whether a specific object exists in the video. The algorithm models differ according to the service requirements, and the frame rates and format sizes corresponding to different algorithm models also differ. For example, a first algorithm model is used to identify whether construction equipment exists in the video, requiring a frame rate of 10 fps (frames per second) and a format size of 800 × 600 pixels; a second algorithm model is used to identify whether there is a river in the video, requiring a frame rate of 15 fps and a format size of 1024 × 786 pixels. The algorithm models, frame rate parameters and format parameters are all designed according to specific service requirements, and the configured algorithm models, frame rate parameters and format parameters need to be acquired from the first server before video processing.
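A minimal sketch of how such a per-model configuration might be represented on the gateway; the model names and parameter values are hypothetical, mirroring the example above, and the patent does not prescribe any particular data structure.

    from dataclasses import dataclass

    @dataclass
    class ModelConfig:
        """One of the N (algorithm model, frame rate, format) triples sent by the first server."""
        model_id: str        # identifies the algorithm model
        frame_rate_fps: int  # frame rate parameter required by this model
        frame_size: tuple    # format parameter: (width, height) in pixels

    # Hypothetical configuration for N = 2, mirroring the example above.
    CONFIGS = [
        ModelConfig("construction_equipment_detector", 10, (800, 600)),
        ModelConfig("river_detector", 15, (1024, 786)),
    ]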
And 102, acquiring a video, wherein the video is recorded by one camera.
For one position, only one camera is used to record video, and that video is processed through the N algorithm models to meet different service requirements, reducing the number of cameras needed while also reducing the occupation of network resources.
And 103, performing format conversion processing on the video by using the target format parameter to obtain converted video data, wherein the target format parameter is at least one format parameter in the N format parameters.
Because different services need different video formats, the video needs to be processed using the format parameter corresponding to each algorithm model so as to obtain video that conforms to that model's format. For example, the format size required by the first algorithm model is 800 × 600 pixels and the format size required by the second algorithm model is 1024 × 786 pixels; the original video is format-converted by the format conversion model provided on the gateway into a first video of 800 × 600 pixels and a second video of 1024 × 786 pixels, the first video is then processed by the first algorithm model, and the second video is processed by the second algorithm model.
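For illustration, converting a single decoded frame to one target format could look like the sketch below; using OpenCV (cv2) for the resize is an assumption of this example, as the patent does not name a conversion library.

    import cv2  # assumed library; the patent does not specify one

    def convert_format(frame, frame_size):
        """Resize one decoded frame to the format parameter required by a model.

        `frame` is a decoded image; `frame_size` is a (width, height) tuple such as
        (800, 600) or (1024, 786) from the example above.
        """
        return cv2.resize(frame, frame_size)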
And step 104, performing frame extraction processing on the converted video data according to the target frame rate parameter to obtain intermediate picture data, wherein the target frame rate parameter is at least one frame rate parameter corresponding to the target format parameter in the N frame rate parameters.
Among the N algorithm models, some may share the same format parameter or the same frame rate parameter. For several algorithm models with the same format parameter, format conversion can be performed once, and different frame extraction processing can then be applied to the converted video to obtain the intermediate picture data.
For example, the format parameter corresponding to the first algorithm model is 800 × 600 pixels and its frame rate parameter is 15 fps; the format parameter corresponding to the second algorithm model is 800 × 600 pixels and its frame rate parameter is 30 fps; the format parameter corresponding to the third algorithm model is 1024 × 786 pixels and its frame rate parameter is 15 fps; and the format parameter corresponding to the fourth algorithm model is 1024 × 786 pixels and its frame rate parameter is 30 fps. The first and second algorithm models share the same format parameter, so their intermediate picture data can be obtained by performing frame extraction at 15 fps and at 30 fps on the converted video of 800 × 600 pixels; similarly, the third and fourth algorithm models share the same format parameter, so frame extraction on the converted video of 1024 × 786 pixels yields two kinds of intermediate picture data, one with a frame rate parameter of 15 fps and one with a frame rate parameter of 30 fps.
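As a sketch of the grouping logic implied by this example, the snippet below (reusing the hypothetical ModelConfig above) collects the models that share a format parameter and records the largest frame rate required for each format; this is one possible arrangement, not a procedure stated in the patent.

    from collections import defaultdict

    def group_by_format(configs):
        """Group model configurations that share a format parameter.

        Returns {frame_size: (max_fps, configs)}, so that each format is converted
        once and frames are extracted at the largest frame rate required for it.
        """
        by_size = defaultdict(list)
        for cfg in configs:
            by_size[cfg.frame_size].append(cfg)
        return {
            size: (max(c.frame_rate_fps for c in cfgs), cfgs)
            for size, cfgs in by_size.items()
        }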
And 105, calculating the intermediate image data through a target algorithm model to obtain a calculation processing result, wherein the target algorithm model is at least one algorithm model corresponding to the target frame rate parameter in the N algorithm models.
After the intermediate picture data that conforms to the target algorithm model is obtained, the intermediate picture data is processed using the target algorithm model to obtain the calculation processing result. The calculation processing result is a group of picture data: processing the intermediate picture data through the target algorithm model yields a group of picture data containing the identification result, where the identification result covers two cases, the object being identified and the object not being identified.
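A minimal sketch of this calculation step, assuming each algorithm model exposes a hypothetical detect() call that returns a list of detections; the patent does not specify the model interface.

    def run_model(model, pictures):
        """Run one algorithm model over the intermediate picture data.

        `model.detect(picture)` is a hypothetical call returning a (possibly empty)
        list of detections; the two cases named above are captured by the
        `object_identified` flag.
        """
        results = []
        for picture in pictures:
            detections = model.detect(picture)
            results.append({
                "picture": picture,
                "detections": detections,
                "object_identified": bool(detections),
            })
        return results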
And step 106, sending the calculation processing result to the first server.
In this embodiment, different format processing and algorithm processing are performed on the video captured by one camera, so that different service requirements can be met while the occupation of equipment and network resources is reduced.
In one embodiment, the frame extraction processing on the converted video data according to the target frame rate parameter to obtain intermediate picture data includes:
performing frame extraction processing on the converted video data through a first frame rate parameter to obtain frame extraction picture data, wherein the first frame rate parameter is the maximum frame rate parameter in the target frame rate parameters;
naming the frame-extracted picture data to obtain first coded data;
and sorting the frame extraction image data through the first encoding data and a second frame rate to obtain intermediate image data, wherein the second frame rate is at least one frame rate parameter corresponding to the target algorithm model.
In this embodiment, frame extraction processing is performed on the converted video data. Because the same format parameter may correspond to multiple frame rate parameters, the data obtained after frame extraction needs to be distinguished by naming. For the same format parameter, frame extraction is performed once according to the maximum frame rate parameter; the resulting frame-extracted picture data therefore already contains the frames that would be obtained by extraction at the other frame rates, and sorting the frame-extracted picture data yields the picture data required at the other frame rates.
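Under the assumption that the lower frame rates divide the maximum frame rate evenly (as in the 60/30/15 fps example that follows), this sorting step can be sketched as simple index striding:

    def thin_to_rate(frames_at_max, max_fps, target_fps):
        """Derive a lower-frame-rate sequence from the maximum-rate extraction.

        Assumes target_fps divides max_fps evenly (e.g. 60 -> 30 or 15 fps);
        every `step`-th frame of the maximum-rate sequence is kept.
        """
        step = max_fps // target_fps
        return frames_at_max[::step]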
For example, if the maximum frame rate parameter is 60 fps, frame extraction is performed at 60 fps to obtain the frame-extracted picture data. The frame-extracted picture data is named to obtain the first coded data, where the first coded data includes at least one of a camera number, a format code, a timestamp, a frame number and orientation information. The frame-extracted picture data is then sorted according to the first coded data to obtain intermediate picture data at the other frame rate parameters, such as 30 fps and 15 fps.

In one embodiment, performing format conversion processing on the video by using the target format parameter to obtain converted video data includes:
decoding the video to obtain a decoded video;
copying the decoded video to obtain a plurality of copied videos, wherein the number of the copied videos is the same as that of the target format parameters;
and processing the target copy video by using the target format parameter to obtain converted video data, wherein the target copy video is one of a plurality of copy videos.
In this embodiment, after the video shot by the camera is acquired, the video is decoded to obtain a decoded video, and different format conversions are then performed on the decoded video. Because the intermediate picture data is obtained by frame extraction from the converted video data, the decoded video is first copied, with the number of copies equal to the number of target format parameters, to obtain the copied videos. Each copied video is then format-converted to obtain converted video data that meets the corresponding format parameter, effectively adapting to the N algorithm models.
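A rough sketch of this decode, copy and convert sequence, assuming OpenCV decoding and reusing the convert_format helper sketched earlier; the copy count equals the number of target format parameters, as described above.

    import cv2  # assumed library; the patent does not specify one

    def decode_and_copy(video_path, frame_sizes):
        """Decode the camera video and build one converted stream per format parameter.

        Each decoded frame is copied once per target format parameter and resized
        with the convert_format helper sketched earlier, so the number of copies
        equals the number of target format parameters.
        """
        converted = {size: [] for size in frame_sizes}
        cap = cv2.VideoCapture(video_path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            for size in frame_sizes:
                converted[size].append(convert_format(frame.copy(), size))
        cap.release()
        return converted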
In one embodiment, the calculation processing result includes fused picture data and second coded data, and performing calculation processing on the intermediate picture data through the target algorithm model to obtain the calculation processing result includes:
processing the intermediate picture data by using a target algorithm model to obtain an analysis result;
fusing the analysis result and the intermediate picture data to obtain fused picture data;
naming the fused picture data to obtain second coded data;
sending the calculation processing result to the first server, including:
and sending the second coded data and the fused picture data to the first server.
In this embodiment, the algorithm model performs calculation processing on the intermediate picture data to obtain an analysis result, and the analysis result is fused with the intermediate picture data to obtain the fused picture data. The fused picture data includes at least one of an identification tag, an identified-object boundary, classification parameters and user-defined parameters, so that it meets the business requirements. The fused picture data is named to obtain the second coded data, where the second coded data includes at least one of a camera number, a format code, a timestamp, a frame number and orientation information; according to the second coded data, the server can send the fused picture data to a specific platform, so that the video of one camera serves multiple service requirements.
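As an illustration of the fusion step, the sketch below draws a hypothetical identification tag and object boundary onto the intermediate picture; the detection format (a label plus a box) is an assumption of this example, not something specified by the patent.

    import cv2  # assumed library; the patent does not specify one

    def fuse(picture, detections):
        """Overlay the analysis result on an intermediate picture.

        `detections` is assumed to be a list of (label, (x, y, w, h)) tuples; the
        identification tag and object boundary are drawn onto a copy of the
        picture to form the fused picture data.
        """
        fused = picture.copy()
        for label, (x, y, w, h) in detections:
            cv2.rectangle(fused, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(fused, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        return fused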
Referring to fig. 2, fig. 2 is a second flowchart of a video processing method according to an embodiment of the present invention, executed by a first server, as shown in fig. 2, the video processing method includes:
step 201, obtaining configuration information, where the configuration information includes N algorithm models and M second servers, the N algorithm models are N algorithm models respectively corresponding to the M second servers, and M is a natural number greater than 1.
The first server is the server that communicates with the gateway, and the second servers are service servers. Different algorithm models are configured according to different service requirements: the algorithm models and the second servers may be in one-to-one correspondence, one second server may correspond to several algorithm models, or one algorithm model may correspond to several second servers, and the algorithm models can be designed according to the service requirements.
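A hypothetical configuration table on the first server might pair each second server with its algorithm model(s); the URLs and model names below are illustrative only and not taken from the patent.

    # Hypothetical first-server configuration: each second (service) server is
    # paired with the algorithm model(s) serving its business requirement.
    SECOND_SERVERS = {
        "https://construction-platform.example/api/results": ["construction_equipment_detector"],
        "https://river-monitoring.example/api/results": ["river_detector"],
    }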
Step 202, sending N algorithm models, N frame rate parameters and N format parameters to the gateway, where the N frame rate parameters are N parameters corresponding to the N algorithm models, the N format parameters are N parameters corresponding to the N frame rate parameters, and N is a natural number greater than 1.
And 203, receiving a calculation processing result, wherein the calculation processing result is a processing result sent to the first server by the gateway responding to the N algorithm models.
And step 204, sending a calculation processing result to a target server, wherein the target server is at least one server in the M second servers.
In this embodiment, the first server sends the calculation processing results to the M second servers, thereby adapting to multiple service requirements while reducing the use of cameras and network resources.
In one embodiment, the calculation processing result includes fused picture data and third encoded data, where the third encoded data is obtained by naming the fused picture data by the gateway, and sending the calculation processing result to the target server includes:
and sending the fused picture data to the target server according to the third coded data.
In this embodiment, the third encoded data includes at least one of a camera number, a format code, a timestamp, a frame number and orientation information, and the first server can send the fused picture data to the target server according to the third encoded data.
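The text does not state which field of the coded data selects the target server, so the sketch below keys a routing table on the PF (format code) field purely as an illustration, and it assumes delivery over HTTP with the requests library; neither choice comes from the patent.

    import requests  # HTTP delivery is an assumption; the patent does not specify a protocol

    def forward_fused_picture(encoded_name, picture_bytes, routing_table):
        """Forward one fused picture to its target second server(s).

        `encoded_name` follows the CN_P_PF_YYYYMMDDHHMMSS_FN_PPPSTTZZ rule described
        below; which field selects the service is not stated in the text, so this
        sketch keys `routing_table` on the PF (format code) field as an illustration.
        """
        format_code = encoded_name.split("_")[2]  # PF field
        for url in routing_table.get(format_code, []):
            requests.post(url, files={"picture": (encoded_name + ".jpg", picture_bytes)})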
The first coded data, the second coded data or the third coded data are named according to the following rules:
the encoded data includes at least one of a camera number, a format code, a time stamp, a frame number, and orientation information, the format being designated as CN _ P _ PF _ YYYYMMDDHHMMSS _ FN _ PPPSTTZZ. Wherein:
CN represents a camera number;
p represents whether the picture is processed by an algorithm model;
PF represents fixed coding of a picture format, specifically as follows:
[Table of picture-format codes (PF), shown only as an image in the original publication]
YYYYMMDDHHMMSS represents the time, including year, month, day, hour, minute and second, with zero padding when a field has too few digits;
FN represents the picture frame number; when it has fewer than 2 digits, it is left-padded with '0';
PPPSTTZZ represents orientation information, as follows:
[Table of PPPSTTZZ orientation-information fields, shown only as an image in the original publication]
The first coded data, the second coded data and the third coded data are all obtained through this naming rule.
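A sketch of the naming rule as described; the concrete PF format codes and the PPPSTTZZ orientation encoding appear only as image tables in the publication, so the field values used below are placeholders.

    from datetime import datetime

    def build_name(camera_no, processed, format_code, timestamp, frame_no, orientation):
        """Build a picture name following CN_P_PF_YYYYMMDDHHMMSS_FN_PPPSTTZZ."""
        return "_".join([
            camera_no,                            # CN: camera number
            processed,                            # P: whether processed by an algorithm model
            format_code,                          # PF: fixed code of the picture format
            timestamp.strftime("%Y%m%d%H%M%S"),   # time, zero-padded by strftime
            f"{frame_no:02d}",                    # FN: frame number, left-padded to 2 digits
            orientation,                          # PPPSTTZZ: orientation information
        ])

    # Example with placeholder field values:
    # build_name("CAM01", "1", "01", datetime(2022, 5, 10, 9, 30, 0), 7, "00000000")
    # -> "CAM01_1_01_20220510093000_07_00000000"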
Referring to fig. 3, fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention, as shown in fig. 3, the video processing apparatus 300 includes:
a receiving module 301, configured to receive N algorithm models, N frame rate parameters and N format parameters sent by a first server, where the N frame rate parameters are N parameters corresponding to the N algorithm models, the N format parameters are N parameters corresponding to the N frame rate parameters, and N is a natural number greater than 1;
an obtaining module 302, configured to obtain a video, where the video is recorded by a camera;
a first processing module 303, configured to perform format conversion processing on the video by using the target format parameter to obtain converted video data, where the target format parameter is at least one format parameter of the N format parameters;
a second processing module 304, configured to perform frame extraction processing on the converted video data according to a target frame rate parameter to obtain intermediate picture data, where the target frame rate parameter is at least one frame rate parameter corresponding to a target format parameter in the N frame rate parameters;
a third processing module 305, configured to perform calculation processing on the intermediate image data through a target algorithm model to obtain a calculation processing result, where the target algorithm model is at least one algorithm model corresponding to the target frame rate parameter in the N algorithm models;
a sending module 306, configured to send the calculation processing result to the first server.
In one embodiment, the second processing module 304 includes:
the first processing unit is used for performing frame extraction processing on the converted video data through a first frame rate parameter to obtain frame extraction picture data, wherein the first frame rate parameter is the largest frame rate parameter in the target frame rate parameters;
the second processing unit is used for naming the frame extraction picture data to obtain first coded data;
and the third processing unit is used for sorting the frame extraction image data through the first encoding data and a second frame rate to obtain intermediate image data, wherein the second frame rate is at least one frame rate parameter corresponding to the target algorithm model.
In one embodiment, the first processing module 303 includes:
the fourth processing unit is used for decoding the video to obtain a decoded video;
the fifth processing unit is used for copying the decoded video to obtain a plurality of copied videos, wherein the number of the copied videos is the same as that of the target format parameters;
and the sixth processing unit is used for processing the target copy video by using the target format parameter to obtain converted video data, wherein the target copy video is one of the plurality of copy videos.
In one embodiment, the calculation processing result includes fused picture data and second encoded data, and the third processing module 305 includes:
the seventh processing unit is used for processing the intermediate picture data by using the target algorithm model to obtain an analysis result;
the eighth processing unit is used for carrying out fusion processing on the analysis result and the intermediate picture data to obtain fused picture data;
the ninth processing unit is used for naming the fused picture data to obtain second coded data;
the sending module 306 includes:
and the first sending unit is used for sending the second coded data and the fusion picture data to the first server.
The video processing apparatus 300 can implement each process of the embodiments of the video processing method executed by the gateway, with technical features corresponding one to one, and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
Referring to fig. 4, fig. 4 is a second schematic structural diagram of a video processing apparatus according to an embodiment of the present invention, as shown in fig. 4, the video processing apparatus 400 includes:
an obtaining module 401, configured to obtain configuration information, where the configuration information includes N algorithm models and M second servers, the N algorithm models are N algorithm models respectively corresponding to the M second servers, and M is a natural number greater than 1;
a first sending module 402, configured to send N algorithm models, N frame rate parameters, and N format parameters to a gateway, where the N frame rate parameters are N parameters corresponding to the N algorithm models, the N format parameters are N parameters corresponding to the N frame rate parameters, and N is a natural number greater than 1;
a receiving module 403, configured to receive a calculation processing result, where the calculation processing result is a processing result sent by the gateway to the first server in response to the N algorithm models;
a second sending module 404, configured to send the calculation processing result to a target server, where the target server is at least one server in the M second servers.
In one embodiment, the calculation processing result includes fused picture data and third encoded data, where the third encoded data is obtained by naming the fused picture data by the gateway, and the second sending module 404 includes:
and the sending unit is used for sending the fused picture data to the target server according to the third coded data.
The video processing apparatus 400 can implement each process of the embodiments of the video processing method executed by the first server, with technical features corresponding one to one, and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides an electronic device. Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention; the electronic device includes a memory 501, a processor 502, and a program or instruction stored in the memory 501 and executable on the processor 502. When the program or instruction is executed by the processor 502, any step in the method embodiments corresponding to fig. 1 or fig. 2 can be implemented and the same beneficial effects can be achieved, which is not described herein again.
The processor 502 may be a CPU, ASIC, FPGA, GPU, NPU, or CPLD, among others.
Those skilled in the art will appreciate that all or part of the steps of the method according to the above embodiments may be implemented by hardware associated with program instructions, and the program may be stored in a readable medium.
An embodiment of the present invention further provides a readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, any step in the method embodiment corresponding to fig. 1 or fig. 2 may be implemented, and the same technical effect may be achieved, and is not described herein again to avoid repetition. The storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The terms "first," "second," and the like in the embodiments of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Further, the use of "and/or" in this application means that at least one of the connected objects, e.g., a and/or B and/or C, means that 7 cases are included where a alone, B alone, C alone, and both a and B are present, B and C are present, a and C are present, and A, B and C are present.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present application may be substantially or partially embodied in the form of a software product, which is stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g. a mobile phone, a computer, a server, an air conditioner, or other terminal device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A video processing method performed by a gateway, comprising:
receiving N algorithm models, N frame rate parameters and N format parameters, wherein the N frame rate parameters are N parameters respectively corresponding to the N algorithm models, the N format parameters are N parameters respectively corresponding to the N frame rate parameters, and N is a natural number greater than 1;
acquiring a video, wherein the video is recorded by a camera;
performing format conversion processing on the video by using a target format parameter to obtain converted video data, wherein the target format parameter is at least one format parameter in the N format parameters;
performing frame extraction processing on the converted video data according to a target frame rate parameter to obtain intermediate picture data, wherein the target frame rate parameter is at least one frame rate parameter corresponding to the target format parameter in the N frame rate parameters;
calculating the intermediate image data through a target algorithm model to obtain a calculation processing result, wherein the target algorithm model is at least one algorithm model corresponding to the target frame rate parameter in the N algorithm models;
and sending the calculation processing result to the first server.
2. The method of claim 1, wherein the performing frame extraction on the converted video data according to the target frame rate parameter to obtain intermediate picture data comprises:
performing frame extraction processing on the converted video data through a first frame rate parameter to obtain frame extraction picture data, wherein the first frame rate parameter is the largest frame rate parameter in the target frame rate parameters;
naming the frame-extracted picture data to obtain first coded data;
and sorting the frame extraction picture data through the first encoding data and a second frame rate to obtain the intermediate picture data, wherein the second frame rate is at least one frame rate parameter corresponding to the target algorithm model.
3. The method according to claim 2, wherein the performing format conversion processing on the video by using the target format parameter to obtain converted video data comprises:
decoding the video to obtain a decoded video;
copying the decoded video to obtain a plurality of copied videos, wherein the number of the copied videos is the same as that of the target format parameters;
and processing the target copy video by using the target format parameter to obtain converted video data, wherein the target copy video is one of the plurality of copy videos.
4. The method according to claim 2, wherein the calculation processing result includes fused picture data and second encoded data, and the performing calculation processing on the intermediate picture data through the target algorithm model to obtain the calculation processing result includes:
processing the intermediate picture data by using the target algorithm model to obtain an analysis result;
fusing the analysis result and the intermediate picture data to obtain fused picture data;
naming the fused picture data to obtain second coded data;
the sending the calculation processing result to the first server includes:
and sending the second coded data and the fused picture data to the first server.
5. A video processing method performed by a first server, comprising:
acquiring configuration information, wherein the configuration information comprises N algorithm models and M second servers, the N algorithm models are N algorithm models respectively corresponding to the M second servers, and M is a natural number greater than 1;
sending the N algorithm models, N frame rate parameters and N format parameters to a gateway, wherein the N frame rate parameters are N parameters respectively corresponding to the N algorithm models, the N format parameters are N parameters respectively corresponding to the N frame rate parameters, and N is a natural number greater than 1;
receiving a calculation processing result, wherein the calculation processing result is a processing result sent to the first server by the gateway in response to the N algorithm models;
and sending the calculation processing result to a target server, wherein the target server is at least one server in the M second servers.
6. The method according to claim 5, wherein the calculation processing result includes fused picture data and third encoded data, the third encoded data is obtained by naming the fused picture data by the gateway, and the sending the calculation processing result to the target server includes:
and sending the fused picture data to the target server according to the third coded data.
7. A video processing apparatus, comprising:
a receiving module, configured to receive N algorithm models, N frame rate parameters and N format parameters sent by a first server, where the N frame rate parameters are N parameters respectively corresponding to the N algorithm models, the N format parameters are N parameters respectively corresponding to the N frame rate parameters, and N is a natural number greater than 1;
the acquisition module is used for acquiring a video, and the video is recorded by a camera;
the first processing module is used for performing format conversion processing on the video by using a target format parameter to obtain converted video data, wherein the target format parameter is at least one format parameter in the N format parameters;
a second processing module, configured to perform frame extraction processing on the converted video data according to a target frame rate parameter to obtain intermediate picture data, where the target frame rate parameter is at least one frame rate parameter corresponding to the target format parameter in the N frame rate parameters;
a third processing module, configured to perform calculation processing on the intermediate image data through a target algorithm model to obtain a calculation processing result, where the target algorithm model is at least one algorithm model corresponding to the target frame rate parameter in the N algorithm models;
and the sending module is used for sending the calculation processing result to the first server.
8. A video processing apparatus, comprising:
the acquisition module is used for acquiring configuration information, wherein the configuration information comprises N algorithm models and N second servers, and the N algorithm models are N algorithm models respectively corresponding to the N second servers;
a first sending module, configured to send the N algorithm models, N frame rate parameters, and N format parameters to a gateway, where the N frame rate parameters are N parameters respectively corresponding to the N algorithm models, the N format parameters are N parameters respectively corresponding to the N frame rate parameters, and N is a natural number greater than 1;
the receiving module is used for receiving a calculation processing result, wherein the calculation processing result is a processing result sent to the first server by the gateway in response to the N algorithm models;
and the second sending module is used for sending the calculation processing result to a target server, wherein the target server is at least one server in the N second servers.
9. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps in the video processing method according to any one of claims 1 to 4; or implementing the steps in the video processing method according to any of claims 5 to 6.
10. A readable storage medium storing a program, wherein the program, when executed by a processor, implements the steps in the video processing method according to any one of claims 1 to 4; or implementing the steps in the video processing method according to any of claims 5 to 6.
CN202210500792.3A 2022-05-10 2022-05-10 Video processing method and device, electronic equipment and readable storage medium Pending CN114598834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210500792.3A CN114598834A (en) 2022-05-10 2022-05-10 Video processing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210500792.3A CN114598834A (en) 2022-05-10 2022-05-10 Video processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114598834A true CN114598834A (en) 2022-06-07

Family

ID=81812753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210500792.3A Pending CN114598834A (en) 2022-05-10 2022-05-10 Video processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114598834A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115514970A (en) * 2022-10-28 2022-12-23 重庆紫光华山智安科技有限公司 Image frame pushing method and system, electronic equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004187170A (en) * 2002-12-05 2004-07-02 Ntt Communications Kk Video conference system
CN109618225A (en) * 2018-12-25 2019-04-12 百度在线网络技术(北京)有限公司 Video takes out frame method, device, equipment and medium
CN112188285A (en) * 2020-09-28 2021-01-05 北京达佳互联信息技术有限公司 Video transcoding method, device, system and storage medium
CN113824913A (en) * 2021-08-12 2021-12-21 荣耀终端有限公司 Video processing method and device, electronic equipment and storage medium
CN114363579A (en) * 2022-01-21 2022-04-15 中国铁塔股份有限公司 Monitoring video sharing method and device and electronic equipment


Similar Documents

Publication Publication Date Title
CN104137146A (en) Method and system for video coding with noise filtering of foreground object segmentation
CN111818295B (en) Image acquisition method and device
WO2021121264A1 (en) Snapshot picture transmission method, apparatus and system, and camera and storage device
Huamán et al. Authentication and integrity of smartphone videos through multimedia container structure analysis
CN114598834A (en) Video processing method and device, electronic equipment and readable storage medium
CN107979766B (en) Content streaming system and method
CN113628116A (en) Training method and device for image processing network, computer equipment and storage medium
CN111263113A (en) Data packet sending method and device and data packet processing method and device
CN114746870A (en) High level syntax for priority signaling in neural network compression
CN110570614B (en) Video monitoring system and intelligent camera
CN114363579B (en) Method and device for sharing monitoring video and electronic equipment
CN112560552A (en) Video classification method and device
CN115209179A (en) Video data processing method and device
CN112533029B (en) Video time-sharing transmission method, camera device, system and storage medium
CN110798656A (en) Method, device, medium and equipment for processing monitoring video file
CN109784226B (en) Face snapshot method and related device
CN106534137B (en) Media stream transmission method and device
CN110049037B (en) Network video data acquisition method based on data link layer
CN117336524A (en) Video data processing method, device, equipment and storage medium
CN113038254B (en) Video playing method, device and storage medium
CN102694985A (en) Information superposition method, information extraction method, apparatus and system of video images
CN109246434B (en) Video encoding method, video decoding method and electronic equipment
CN116366873A (en) Video fusion processing method, device and storage medium
CN106776794A (en) A kind of method and system for processing mass data
CN117692668A (en) Live broadcast examination transcoding method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220607