CN112463250A - Video processing method and computing device - Google Patents

Video processing method and computing device

Info

Publication number
CN112463250A
Authority
CN
China
Prior art keywords
function
standard
event
video processing
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011399513.6A
Other languages
Chinese (zh)
Inventor
李田迎 (Li Tianying)
莫松 (Mo Song)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Cheerbright Technologies Co Ltd
Original Assignee
Beijing Cheerbright Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Cheerbright Technologies Co Ltd filed Critical Beijing Cheerbright Technologies Co Ltd
Priority to CN202011399513.6A priority Critical patent/CN112463250A/en
Publication of CN112463250A publication Critical patent/CN112463250A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • G06F9/4451User profiles; Roaming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a video processing method suitable for execution in a computing device, wherein one or more video processing function packages are stored in the computing device, each video processing function package comprising a data binding relationship between a standard event and a standard function, together with the initialization parameters and function parameters of the standard function, for realizing a specific video processing effect. The method comprises the following steps: acquiring a corresponding video processing function package according to a preset video processing requirement; parsing the standard event, the standard function, and the corresponding initialization parameters from the configuration file; executing the parsed standard event on each frame of video image and judging whether an expected execution result is obtained; if so, extracting the function parameter values corresponding to the standard function from the current video image; and executing the corresponding standard function on the current video image based on the initialization parameters and the function parameter values to realize the corresponding video processing effect. The invention also discloses a computing device for executing the method.

Description

Video processing method and computing device
Technical Field
The present invention relates to the field of video technologies, and in particular, to a video processing method and a computing device.
Background
Video is a commonly used form of presenting information content and now occupies an increasingly important position in daily life. With the popularity and development of DSL (domain-specific language) technology, mature cross-platform schemes such as React Native (RN) and Flutter have brought simpler and more flexible ways to mobile development and have become a trend in front-end development. However, in the current audio/video processing field, audio/video processing functions are still mainly developed in the traditional native manner. In particular, a large amount of object-creation initialization and environment initialization work must be coded manually, and the existing development mode cannot provide a unified development capability across both mobile platforms, so a more efficient video effect processing method is needed.
Disclosure of Invention
In view of the above, the present invention proposes a video processing method and a computing device in an attempt to solve, or at least solve, the above existing problems.
According to an aspect of the present invention, there is provided a video processing method adapted to be executed in a computing device, wherein the computing device stores one or more video processing function packages, each video processing function package having a configuration file, the configuration file including a data binding relationship between a standard event and a standard function, and an initialization parameter and a function parameter of the standard function, for realizing a specific video processing effect, the method comprising the steps of: acquiring a corresponding video processing function package according to a preset video processing requirement; analyzing each standard event, each standard function and corresponding initialization parameters from the configuration file of the video processing function package; executing the analyzed standard event on each frame of video image, and judging whether an expected execution result is obtained or not; if so, extracting a function parameter value corresponding to the standard function from the current video image; and executing the corresponding standard function on the current video image based on the initialization parameter and the function parameter value so as to realize the corresponding video processing effect.
Optionally, in the video processing method according to the present invention, the video processing function package further includes a resource file required for executing the processing function; the resource file comprises at least one of texture resources, audio and video resources, shader files, script files and model files.
Optionally, in the video processing method according to the present invention, the initialization parameters are preset fixed parameter values, including the resource path of a resource file, the color and granularity of a mosaic function, and the text content, font, and color of a text rendering function; the function parameters are dynamic parameter values, including the coordinate values, width, and height of the recognized object.
Optionally, in the video processing method according to the present invention, the configuration file includes a standard event type description, a standard function type description, and a trigger condition and a call parameter of a standard function, and the data binding relationship includes a function parameter that the standard event needs to be transferred to the standard function.
Optionally, in the video processing method according to the present invention, the step of performing a corresponding standard function on the current video image based on the initialization parameter value and the function parameter value includes: judging whether the parameter value of the function meets the triggering condition of the standard function; and if so, executing a corresponding standard function on the current video image based on the initialization parameter value and the function parameter value.
Optionally, in the video processing method according to the present invention, the method further includes a step of generating the video processing function package: selecting a corresponding standard event array and a corresponding standard function array from a standard event library and a standard function library, respectively, according to a preset video processing requirement; establishing a data binding relationship between the standard event array and the standard function array, and configuring the initialization parameters, function parameters, trigger conditions, and call parameters of the standard functions; and locating and downloading the corresponding resource files according to the initialization parameters.
Optionally, in the video processing method according to the present invention, the standard event includes at least one of an identification event, a user event, a time event, a hardware event, and a model event; the standard function comprises at least one of a rendering function, an application function, a shooting function and an audio and video editing function.
Optionally, in the video processing method according to the present invention, the recognition event includes at least one of license plate recognition, face recognition, vehicle type recognition, vehicle face recognition, and gesture recognition; the user event comprises voice recognition and/or a user interface gesture; the time event comprises preview time and/or recording time; the hardware event comprises at least one of a GPS acquisition event, a gyroscope data acquisition event, and an interrupt event; and the model event comprises an AR model and/or a 3D model.
Optionally, in the video processing method according to the present invention, the rendering function includes at least one of static sticker, text rendering, video watermarking, label rendering, animation sticker, time rendering, particle special effect, video special effect, and mosaic rendering; the application function comprises at least one of audio and video playing, jumping to an H5 page and service broadcasting, and the shooting function comprises at least one of video screen capturing, recording starting and recording pausing; the audio and video editing function comprises at least one of a time special effect, an audio special effect, a speed change special effect and background music.
Optionally, in the video processing method according to the present invention, the video processing function package has a uniform content format, and the configuration file is a JSON format file.
Optionally, in the video processing method according to the present invention, the step of executing the parsed standard event on each frame of video image includes: after a target object is identified in a certain video image frame, tracking the target object in subsequent video image frames according to an image tracking algorithm, so that the standard event can be executed quickly on the remaining video image frames.
According to another aspect of the present invention, there is provided a computing device comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs when executed by the processors implement the steps of the video processing method as described above.
According to a further aspect of the invention there is provided a readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, implement the steps of the video processing method as described above.
According to the technical scheme of the invention, standard event and function definitions are abstracted from existing video processing functions, and standardized initialization parameters and input/output parameters of events and functions are defined. Each package can completely describe a video processing effect, so that event and function initialization and function calls can be completed automatically by parsing the parameters in the function configuration package. Moreover, the invention adopts a content standard with a uniform format, ensuring that the same video function package can be used on both the Android end and the iOS end with identical effect. By configuring video processing description packages in a standard definition format, the invention enables rapid local development of audio/video processing functions as well as their dynamic loading and updating.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a block diagram of a computing device 100, according to one embodiment of the invention;
FIG. 2 is a diagram illustrating generation of a video processing function package according to an embodiment of the present invention;
FIG. 3 shows a schematic diagram of a video processing feature pack configuration file according to one embodiment of the invention;
FIGS. 4A-4D illustrate schematic diagrams of various configuration contents in a configuration file, according to one embodiment of the invention; and
Fig. 5 shows a flow diagram of a video processing method 500 according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 is a block diagram of a computing device 100 according to one embodiment of the invention. In a basic configuration 102, computing device 100 typically includes system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processing, including but not limited to: a microprocessor (μ P), a microcontroller (μ C), a Digital Signal Processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level one cache 110 and a level two cache 112, a processor core 114, and registers 116. The example processor core 114 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 118 may be used with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 106 may include an operating system 120, one or more applications 122, and program data 124. In some embodiments, application 122 may be arranged to operate with program data 124 on an operating system. The program data 124 comprises instructions, and in the computing device 100 according to the invention the program data 124 comprises instructions for performing the video processing method 500.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via the bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer-readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or direct-wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer-readable media as used herein may include both storage media and communication media.
Computing device 100 may be implemented as a server, such as a file server, a database server, an application server, a WEB server, etc., or as part of a small-form-factor portable (or mobile) electronic device, such as a cellular telephone, a personal digital assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions. Computing device 100 may also be implemented as a personal computer including both desktop and notebook computer configurations. In some embodiments, the computing device 100 is configured to perform the video processing method 500.
According to an embodiment of the present invention, the computing device 100 may further store one or more video processing function packages, each of which may be a compressed package file having a configuration file and a resource file, wherein the configuration file includes a data binding relationship between a standard event and a standard function, and an initialization parameter and a function parameter of the standard function, for implementing a specific video processing effect.
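The loading of such a compressed function package might be sketched as follows. This is a hedged Python illustration: the archive layout, the file name `config.json`, the `resources/` directory, and all field names are assumptions for demonstration, not the patent's actual format.

```python
import io
import json
import zipfile

def load_feature_pack(pack_bytes):
    """Open a compressed function package and parse its configuration file.

    Hypothetical layout: a top-level 'config.json' plus a 'resources/' directory.
    """
    with zipfile.ZipFile(io.BytesIO(pack_bytes)) as zf:
        config = json.loads(zf.read("config.json"))
        resources = [n for n in zf.namelist() if n.startswith("resources/")]
    return {"config": config, "resources": resources}

# Build a tiny in-memory package for demonstration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config.json", json.dumps({"events": ["plate_recognition"],
                                           "functions": ["mosaic_render"]}))
    zf.writestr("resources/mosaic.png", b"\x89PNG...")  # placeholder bytes

pack = load_feature_pack(buf.getvalue())
print(pack["config"]["events"])  # ['plate_recognition']
```

Packaging configuration and resources in one archive is what allows a package to be loaded locally or fetched over the network as a single unit.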
Fig. 2 shows a schematic diagram of the generation of a video processing function package according to an embodiment of the present invention; the package is assembled from a standard event library and a standard function library. The standard event library contains a plurality of standard events, and the standard function library contains a plurality of standard functions. Standard events are all events that can trigger video processing; the invention organizes the various input events into a unified standard event format. Standard functions encapsulate all audio/video special effects and software/hardware processing capabilities into a unified standard function description.
Specifically, the standard event includes at least one of an identification event, a user event, a time event, a hardware event, and a model event. The identification event comprises at least one of license plate recognition, face recognition, vehicle type recognition, vehicle face recognition, and gesture recognition; the user event comprises voice recognition and/or a user interface gesture; the time event comprises preview time and/or recording time; the hardware event comprises at least one of a GPS acquisition event, a gyroscope data acquisition event, and an interrupt event; and the model event comprises an AR model and/or a 3D model.
The standard function is an editing operation capability for the structure or content of the video file, and includes at least one of a rendering function, an application function, a shooting function, and an audio/video editing function. Specifically, the rendering function includes at least one of static sticker, text rendering, video watermarking, label rendering, animation sticker, time rendering, particle special effect, video special effect, and mosaic rendering; the application function comprises at least one of audio and video playing, jumping to an H5 page and service broadcasting; the shooting function comprises at least one of video screen capture, recording start and recording pause; the audio and video editing function comprises at least one of a time special effect, an audio special effect, a speed change special effect and background music.
The video processing function package has a uniform content format and provides a standard organization of resources, data, and logic. A single package can completely describe one video processing function, and one function package can contain one or more video processing functions. For example, one function package can simultaneously realize a license plate mosaic and the display of a text label. A function package can freely combine multiple events and multiple functions to realize a complete video processing function, for example recognizing a license plate event and a vehicle face event at the same time, adding a mosaic function to the license plate, and adding a decoration function to the vehicle face. The function package supports both local loading and network loading, and supports dynamic updating of video processing functions. Here, the computing device may be communicatively coupled to a server to obtain the latest version of a video processing function package or to update a local one.
Generally, the configuration file is in JSON format and is used to describe the specific behavior characteristics of the overall video processing function. Fig. 3 is an example of a configuration file according to an embodiment of the present invention, which includes a standard event type description, a standard function type description, a data binding relationship between a standard event and a standard function, initialization parameters, and the trigger conditions and call parameters of the standard function. The data binding relationship includes the function parameters that the standard event needs to transfer to the standard function; for example, license plate recognition outputs the region where the plate is located, and this information is transferred to the mosaic rendering function to render that region.
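A configuration file of this shape could look roughly like the following sketch, parsed here with Python's `json` module. Every field name (`events`, `functions`, `bindings`, `init`, `trigger`, and so on) is a hypothetical illustration, not the patent's actual schema.

```python
import json

# Hypothetical configuration for a "license plate -> mosaic" effect.
CONFIG_TEXT = """
{
  "events":    [{"id": "e1", "type": "plate_recognition"}],
  "functions": [{"id": "f1", "type": "mosaic_render",
                 "init":    {"color": "#808080", "granularity": 16},
                 "trigger": {"region": [0, 0, 1920, 1080]}}],
  "bindings":  [{"event": "e1", "function": "f1",
                 "params": ["x", "y", "width", "height"]}]
}
"""

config = json.loads(CONFIG_TEXT)
binding = config["bindings"][0]
print(binding["params"])  # the parameters the event must pass to the function
```

The `init` block corresponds to the preset fixed initialization parameters, while the `params` list names the dynamic values the event supplies at run time.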
Specifically, the initialization parameters are preset fixed parameter values, including the resource paths of resource files (such as picture resource paths, script paths, and the like), the color and granularity of the mosaic function, and the text content, font, and color of the text rendering function. The function parameters are dynamic parameter values that change in real time, including the coordinate values, width, and height of a recognized object, such as license plate position information or vehicle face information identified from the video.
Fig. 4A-4D are, respectively, an event/function structure description, a data binding relationship field description, an initialization parameter description, and a call condition/parameter description of a configuration file according to an embodiment of the present invention; the four field descriptions are set out in the tables shown in those figures.
The resource files are the various resources required by a function; the path of a resource file is passed to the specific function for use via the configured initialization parameters or function parameters. The resource files comprise at least one of texture resources, audio/video resources, shader files, script files, and model files. For example, a texture resource may be a texture image; an audio/video resource may be music played after an object is recognized; a shader file may be used to color a target object; a script file may compute special-effect coordinate values based on the coordinates of a recognized object; and a model file may be a 3D model or an AR model.
Based on the above, a video processing function package can be generated according to the following steps:
First, according to a preset video processing requirement, a corresponding standard event array and a corresponding standard function array are selected from the standard event library and the standard function library, respectively. As shown in fig. 3, for example, when the target requirement is to print a mosaic over a recognized license plate, a license plate recognition event is selected from the standard event library, and a mosaic rendering function is selected from the standard function library.
Then, a data binding relationship between the standard event array and the standard function array is established, and the initialization parameters, function parameters, trigger conditions, and call parameters of the standard functions are configured. For example, the user describes in the configuration a data binding relationship between "license plate recognition" and "mosaic rendering": the license plate recognition result is bound and transmitted to the mosaic function for drawing the mosaic area, thereby determining where the license plate appears and adding the mosaic-effect logic. The mosaic effect can be drawn only after this logical relationship (displaying a mosaic at the license plate position) has been determined and the license plate position has been identified in specific frame data once video capture starts.
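The binding just described can be sketched minimally as follows. The event and function here are stubs standing in for real recognition and rendering, and the parameter names `x` and `y` are illustrative assumptions.

```python
def recognize_plate(frame):
    """Stand-in for a license plate recognition event; returns the plate's
    position when one is found, else None (an illustrative assumption)."""
    return frame.get("plate")

def mosaic_render(frame, init, x, y):
    """Stand-in for the mosaic rendering function."""
    return "mosaic {}px at ({}, {})".format(init["granularity"], x, y)

def run_binding(frame, init):
    result = recognize_plate(frame)   # execute the standard event
    if result is None:                # expected execution result not obtained
        return None
    # Data binding: pass only the bound parameters from event to function.
    return mosaic_render(frame, init, x=result["x"], y=result["y"])

print(run_binding({"plate": {"x": 100, "y": 200}}, {"granularity": 16}))
# prints "mosaic 16px at (100, 200)"
```

When the event produces no result, the function is never invoked, which mirrors the "if the expected execution result is obtained" gate in the method.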
Finally, the corresponding resource files are located and downloaded according to the initialization parameters. The invention downloads the resource files required by each video processing function package in advance, so that they can be used directly when video processing is required, improving processing speed and satisfying the real-time requirements of video processing.
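Resource pre-fetching of this kind could be sketched as follows. The `_path` naming convention and the `fetch` callable are assumptions; a dictionary stands in for an actual download cache.

```python
def prefetch_resources(init_params, fetch):
    """Collect every resource path mentioned in the initialization
    parameters and fetch it ahead of time into a local cache."""
    cache = {}
    for key, value in init_params.items():
        if key.endswith("_path"):  # assumed naming convention for paths
            cache[value] = fetch(value)
    return cache

# Toy fetcher standing in for a network or disk download.
fetch = lambda path: b"bytes of " + path.encode()
cache = prefetch_resources({"texture_path": "res/brick.png",
                            "granularity": 16}, fetch)
print(sorted(cache))  # ['res/brick.png']
```

Downloading everything at configuration time, rather than at first use, is what lets the functions run without stalling on I/O during playback.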
Once each video processing function package has been configured, special effect processing can be performed on the video according to the function package. Fig. 5 shows a flow diagram of a video processing method 500 according to an embodiment of the invention. Method 500 is performed in a computing device, such as computing device 100. As shown in fig. 5, the method begins at step S510.
In step S510, a corresponding video processing function package is obtained according to a preset video processing requirement. For example, if the target requirement is to print a mosaic over a recognized license plate, the pre-configured license plate mosaic function package is obtained; this package includes a standard event for recognizing the license plate, a standard function for rendering the mosaic, the initialization parameters for rendering the mosaic (color, granularity, and the like), and the data binding relationship between the two (for example, the license plate coordinate parameters to be transmitted).
Subsequently, in step S520, each standard event, standard function, and corresponding initialization parameter are parsed from the configuration file of the video processing function package.
Subsequently, in step S530, the parsed standard event is executed for each frame of video image, and whether an expected execution result is obtained is determined.
The expected execution result is a positive execution result; for example, the expected result of license plate recognition is that a license plate is recognized and its coordinate position is obtained, and the expected result of face recognition is that a face is recognized and the coordinates of its facial features and feature points are obtained.
According to one embodiment, the step of performing the parsed standard event for each frame of the video image includes: after a target object is identified in a certain frame of video image, the target object is tracked in the subsequent video image frame according to an image tracking algorithm, so that the standard event can be quickly executed on other frame of video images.
Here, the standard event is executed once for each frame of image, for example identifying whether a license plate exists in each frame. If the target object is not identified in the current frame, processing of the current frame is skipped and the standard event continues to be executed on the next frame. If a target object is identified in a certain frame, for example once the position of a license plate has been identified, a target tracking algorithm can continuously track whether and where the license plate appears in subsequent image frames. This tracking algorithm improves image recognition efficiency and speeds up execution of the standard event.
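The detect-once-then-track loop can be sketched as follows. `detect` and `track` are toy stubs standing in for a real recognizer and tracking algorithm; the tuple positions are assumed coordinates.

```python
def process_frames(frames, detect, track):
    """Run the detection event per frame, but switch to a cheaper tracker
    once a target has been found (detect/track are injected callables)."""
    target = None
    results = []
    for frame in frames:
        if target is None:
            target = detect(frame)         # full recognition on this frame
        else:
            target = track(frame, target)  # cheaper tracking thereafter
        results.append(target)
    return results

# Toy stubs: "detect" finds the target only in the frame labeled 'plate';
# "track" nudges the previous position by one pixel.
detect = lambda f: (10, 20) if f == "plate" else None
track = lambda f, prev: (prev[0] + 1, prev[1])

print(process_frames(["empty", "plate", "next"], detect, track))
# [None, (10, 20), (11, 20)]
```

Frames before the first detection are skipped (result `None`), and every later frame reuses the tracker instead of re-running full recognition.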
Subsequently, in step S540, if the expected execution result is obtained, the function parameter value corresponding to the standard function is extracted from the current video image.
Here, by parsing the user configuration data, the program extracts the event-to-function binding relationship, initializes the events and functions, connects the standard events and standard functions specified in the configuration file, and establishes the data transfer path between them.
Subsequently, in step S550, a corresponding standard function is performed on the current video image based on the initialization parameter and the function parameter value to achieve a corresponding video processing effect.
For example, in a function package that applies a mosaic to a license plate, the parameters to be passed include the coordinates of the top-left vertex of the plate and its width and height. After the license plate recognition event succeeds, these parameters are obtained from the event and passed to the standard function. The standard function then performs mosaic rendering on the plate region at the corresponding position, according to the parsed initialization parameters, to achieve the license plate occlusion effect.
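Mosaic rendering over the recognized plate rectangle can be illustrated with a tiny pure-Python sketch. Real implementations would operate on decoded frames via a shader or image library; the block-averaging approach below is one common way to pixelate a region, not necessarily the patent's.

```python
# Illustrative mosaic rendering: fill block-sized tiles inside the recognized
# rectangle (x, y, w, h) with their average value. Grayscale, pure Python.

def mosaic(image, x, y, w, h, block=2):
    """Pixelate image[y:y+h][x:x+w] in place using block averages."""
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            tile = [image[r][c]
                    for r in range(by, min(by + block, y + h))
                    for c in range(bx, min(bx + block, x + w))]
            avg = sum(tile) // len(tile)
            for r in range(by, min(by + block, y + h)):
                for c in range(bx, min(bx + block, x + w)):
                    image[r][c] = avg

img = [[0, 10, 20, 30] for _ in range(4)]   # 4x4 grayscale frame
mosaic(img, x=0, y=0, w=4, h=2, block=2)    # pixelate the top half only
print(img[0])  # -> [5, 5, 25, 25]
```

The granularity (here `block`) corresponds to the kind of fixed value the initialization parameters would carry, while the rectangle comes from the recognition event at runtime.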
Furthermore, after the function parameter values are obtained from the standard event, it can be determined whether they satisfy the trigger condition of the standard function. If so, the corresponding standard function is executed on the current video image based on the initialization parameter values and the function parameter values; if not, the standard function is not executed. For example, the trigger condition for applying a mosaic to a license plate may be that the plate lies within a preset image area; if the plate is far away and outside that range, the mosaic rendering function is not triggered.
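The trigger-condition gate can be sketched as follows. The preset region and parameter names are illustrative assumptions, not values from the patent.

```python
# Gate the standard function on a trigger condition: run it only when the
# recognized rectangle lies entirely inside a preset image region.

PRESET_REGION = (0, 0, 640, 360)  # x, y, width, height of the allowed area

def inside(region, x, y, w, h):
    rx, ry, rw, rh = region
    return rx <= x and ry <= y and x + w <= rx + rw and y + h <= ry + rh

def maybe_execute(params, execute):
    """Run the standard function only if the trigger condition holds."""
    if inside(PRESET_REGION, **params):
        return execute(params)
    return None  # condition not met: skip the standard function

hit  = maybe_execute({"x": 100, "y": 50, "w": 120, "h": 40}, lambda p: "rendered")
miss = maybe_execute({"x": 600, "y": 340, "w": 120, "h": 40}, lambda p: "rendered")
print(hit, miss)  # -> rendered None
```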
Optionally, the method 500 may further include a step of generating each event processing function package, where a specific generation process thereof is described in detail above and is not described herein again.
According to the technical scheme of the invention, standard event and function definitions are abstracted, and standardized initialization parameters and input/output parameters are defined for events and functions. With events and functions unified under one standard, project maintenance costs are reduced; resources, data, and logic are packaged uniformly; and unified cross-platform (dual-end) development is supported, so that the same video function package can be used on both ends with identical effect. Dynamic updating of video processing functions is also supported. Thus, by parsing the parameters in the function configuration package, event and function initialization and function invocation can be fully automated.
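Putting the pieces together, the automated event-to-function pipeline described above might be driven by a loop like the following. All names and the callable-based wiring are illustrative: the patent specifies the behavior (event fires, parameters are extracted, bound function runs), not a concrete API.

```python
# End-to-end sketch: for each frame, run each bound standard event; on an
# expected result, extract the bound parameters and invoke the standard
# function. Events and functions are modeled as plain callables here.

def process_video(frames, bindings, events, functions):
    """Drive the event -> function pipeline over every frame."""
    effects = []
    for frame in frames:
        for event_id, (func_id, keys) in bindings.items():
            result = events[event_id](frame)        # execute the standard event
            if result is None:
                continue                            # expected result not reached
            params = {k: result[k] for k in keys}   # extract function parameters
            effects.append(functions[func_id](frame, params))
    return effects

# Toy wiring: the event "recognizes" even-numbered frames only.
bindings  = {"plate_recognition": ("plate_mosaic", ["x"])}
events    = {"plate_recognition": lambda f: {"x": f * 10} if f % 2 == 0 else None}
functions = {"plate_mosaic": lambda f, p: f"mosaic@{p['x']}"}

print(process_video([1, 2, 3, 4], bindings, events, functions))
# -> ['mosaic@20', 'mosaic@40']
```

Because the wiring comes entirely from the `bindings` data, swapping the configuration file swaps the video processing effect without touching the driver loop, which is the automation the scheme aims at.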
A9. The method of A7, wherein the rendering function includes at least one of static sticker, text rendering, video watermarking, label rendering, animation sticker, time rendering, particle special effect, video special effect, and mosaic rendering; the application function includes at least one of audio/video playing, jumping to an H5 page, and service broadcasting; the shooting function includes at least one of video screen capture, recording start, and recording stop; and the audio/video editing function includes at least one of a time special effect, an audio special effect, a speed-change special effect, and background music.
A10. The method of any one of A1-A9, wherein the video processing function packages have a unified content format and the configuration file is a JSON-format file.

A11. The method of any one of A1-A10, wherein the step of executing the parsed standard event on each frame of video image comprises: after a target object is recognized in a certain video frame, tracking the target object in subsequent video frames according to an image tracking algorithm, so that the standard event can be executed quickly on the remaining frames.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the video processing method of the present invention according to instructions in said program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention is disclosed in an illustrative rather than a restrictive sense, with its scope defined by the appended claims.

Claims (10)

1. A video processing method adapted to be executed in a computing device in which one or more video processing function packages are stored, each video processing function package having a configuration file including a data binding relationship between a standard event and a standard function, and an initialization parameter and a function parameter of the standard function for realizing a specific video processing effect, the method comprising the steps of:
acquiring a corresponding video processing function package according to a preset video processing requirement;
analyzing each standard event, each standard function and corresponding initialization parameters from the configuration file of the video processing function package;
executing the analyzed standard event on each frame of video image, and judging whether an expected execution result is obtained or not;
if so, extracting a function parameter value corresponding to the standard function from the current video image; and
and executing a corresponding standard function on the current video image based on the initialization parameter and the function parameter value so as to realize a corresponding video processing effect.
2. The method of claim 1, wherein,
the video processing function package also comprises a resource file required for executing the processing function;
the resource file comprises at least one of texture resources, audio and video resources, shader files, script files and model files.
3. The method of claim 2, wherein,
the initialization parameters are preset fixed parameter values and comprise resource paths of the resource files, colors and granularity of the mosaic function, and text contents, fonts and colors of the text rendering function;
the function parameters are dynamic parameter values, including the coordinate values, width, and height of the recognized object.
4. The method according to any one of claims 1-3, wherein the configuration file comprises a standard event type description, a standard function type description, and trigger conditions and call parameters of a standard function, and the data binding relationship comprises function parameters that the standard event needs to be passed to the standard function.
5. The method of claim 4, wherein the step of performing the corresponding standard function on the current video image based on the initialization parameter value and the function parameter value comprises:
judging whether the parameter value of the function meets the triggering condition of the standard function;
and if so, executing a corresponding standard function on the current video image based on the initialization parameter value and the function parameter value.
6. The method of claim 4, further comprising the step of generating the video processing function package:
respectively selecting a corresponding standard event array and a corresponding standard function array from a standard event library and a standard function library according to a preset video processing requirement;
establishing a data binding relationship between the standard event array and the standard function array, and configuring an initialization parameter, a function parameter, a trigger condition and a calling parameter of the standard function; and
and searching and downloading the corresponding resource file according to the initialization parameter.
7. The method of any one of claims 1-6, wherein
the standard event comprises at least one of an identification event, a user event, a time event, a hardware event and a model event;
the standard function comprises at least one of a rendering function, an application function, a shooting function and an audio and video editing function.
8. The method of claim 7, wherein,
the identification event comprises at least one of license plate identification, face identification, vehicle type identification, vehicle face identification and gesture identification;
the user event comprises voice recognition and/or a user interface gesture, and the time event comprises preview time and/or recording time;
the hardware event comprises at least one of a GPS acquisition event, a gyroscope data acquisition event and an interruption event, and the model event comprises an AR model and/or a 3D model.
9. A computing device, comprising:
at least one processor; and
at least one memory including computer program instructions;
the at least one memory and the computer program instructions are configured to, with the at least one processor, cause the computing device to perform the method of any of claims 1-8.
10. A readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the method of any of claims 1-8.
CN202011399513.6A 2020-12-02 2020-12-02 Video processing method and computing device Pending CN112463250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011399513.6A CN112463250A (en) 2020-12-02 2020-12-02 Video processing method and computing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011399513.6A CN112463250A (en) 2020-12-02 2020-12-02 Video processing method and computing device

Publications (1)

Publication Number Publication Date
CN112463250A true CN112463250A (en) 2021-03-09

Family

ID=74806503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011399513.6A Pending CN112463250A (en) 2020-12-02 2020-12-02 Video processing method and computing device

Country Status (1)

Country Link
CN (1) CN112463250A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120260267A1 (en) * 2011-04-07 2012-10-11 Adobe Systems Incorporated Methods and Systems for Supporting a Rendering API Using a Runtime Environment
CN105979195A (en) * 2016-05-26 2016-09-28 努比亚技术有限公司 Video image processing apparatus and method
CN108717382A (en) * 2018-05-11 2018-10-30 北京奇虎科技有限公司 Audio-video document processing method, device and terminal device based on JSON structures
CN109309868A (en) * 2018-08-19 2019-02-05 朱丽萍 Video file Command Line Parsing system
CN111475676A (en) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 Video data processing method, system, device, equipment and readable storage medium
CN111899155A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN111935528A (en) * 2020-06-22 2020-11-13 北京百度网讯科技有限公司 Video generation method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113986394A (en) * 2021-11-03 2022-01-28 挂号网(杭州)科技有限公司 Event processing method and device, electronic equipment and storage medium
CN115190362A (en) * 2022-09-08 2022-10-14 北京达佳互联信息技术有限公司 Data processing method and device, electronic equipment and storage medium
CN115190362B (en) * 2022-09-08 2022-12-27 北京达佳互联信息技术有限公司 Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination