CN113947747A - Method, device and equipment for processing monitoring image of vehicle


Info

Publication number
CN113947747A
CN113947747A
Authority
CN
China
Prior art keywords
steering wheel
hand
action recognition
wheel action
held
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111028758.2A
Other languages
Chinese (zh)
Other versions
CN113947747B (en)
Inventor
姜英豪
朱星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Future Phantom Technology Co Ltd
Original Assignee
Wuhan Future Phantom Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Future Phantom Technology Co Ltd filed Critical Wuhan Future Phantom Technology Co Ltd
Priority to CN202111028758.2A priority Critical patent/CN113947747B/en
Publication of CN113947747A publication Critical patent/CN113947747A/en
Application granted granted Critical
Publication of CN113947747B publication Critical patent/CN113947747B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a method, a device and equipment for processing a monitoring image of a vehicle, which are used for performing data processing on the monitoring image so as to improve the convenience of applying the monitoring image. The method for processing a monitoring image of a vehicle provided by the application comprises the following steps: acquiring an initial image shot by a camera arranged on a vehicle, wherein the initial image is obtained by shooting the steering wheel of the vehicle; inputting the initial image into a hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, wherein the model is obtained by training an initial model in advance with different images as a training set, the different images being annotated with corresponding hand-held steering wheel action recognition results; and extracting the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model.

Description

Method, device and equipment for processing monitoring image of vehicle
Technical Field
The application relates to the field of monitoring, in particular to a method, a device and equipment for processing a monitoring image of a vehicle.
Background
With the continuous improvement of living standards, car ownership in China keeps rising, and drivers need to follow standard vehicle usage and driving practices in daily use.
In order to facilitate scene playback and evidence preservation, a camera can be arranged inside the vehicle to provide a monitoring video that directly reflects the situation at the time; such a monitoring video can retain the content both inside the vehicle and outside the windows, so that the scene can be faithfully restored.
During research into the related art, the inventors found that when the monitoring video is retrieved for related processing at a later stage, searching through the files is cumbersome; that is, there is a problem of inconvenient application.
Disclosure of Invention
The application provides a method, a device and equipment for processing a monitoring image of a vehicle, which are used for performing data processing on the monitoring image so as to improve the convenience of applying the monitoring image.
In a first aspect, the present application provides a method for processing a monitoring image of a vehicle, the method comprising:
acquiring an initial image shot by a camera arranged on a vehicle, wherein the initial image is obtained by shooting the steering wheel of the vehicle;
inputting the initial image into a hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, wherein the model is obtained by training an initial model in advance with different images as a training set, the different images being annotated with corresponding hand-held steering wheel action recognition results;
and extracting the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model.
With reference to the first aspect of the present application, in a first possible implementation manner of the first aspect of the present application, the initial image includes a plurality of sub-images in the form of an image set or in the form of a video, and the inputting of the initial image into the hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, includes:
sequentially inputting the plurality of sub-images into the hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the plurality of sub-images in sequence;
after the extracting of the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model, the method further comprises:
fusing the hand-held steering wheel action recognition results respectively corresponding to the plurality of sub-images output by the hand-held steering wheel action recognition model, and determining the finally output hand-held steering wheel action recognition result.
With reference to the first possible implementation manner of the first aspect of the present application, in a second possible implementation manner of the first aspect of the present application, the fusing of the hand-held steering wheel action recognition results respectively corresponding to the different sub-images output by the hand-held steering wheel action recognition model, and the determining of the finally output hand-held steering wheel action recognition result, include:
if a target hand-held steering wheel action recognition result whose quantity ratio is greater than a preset ratio threshold exists among the hand-held steering wheel action recognition results respectively corresponding to the different sub-images, determining the target hand-held steering wheel action recognition result as the finally output hand-held steering wheel action recognition result.
With reference to the second possible implementation manner of the first aspect of the present application, in a third possible implementation manner of the first aspect of the present application, with images of a preset number of frames as the recognition unit, the determining of the target hand-held steering wheel action recognition result as the finally output hand-held steering wheel action recognition result, if a target hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold exists among the hand-held steering wheel action recognition results respectively corresponding to the different sub-images, includes:
if a first hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold exists among the hand-held steering wheel action recognition results respectively corresponding to the sub-images of the first half of the preset number of frames, determining the first hand-held steering wheel action recognition result as the finally output hand-held steering wheel action recognition result;
if no first hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold exists among the hand-held steering wheel action recognition results respectively corresponding to the sub-images of the first half of the preset number of frames, continuing to recognize the sub-images of the remaining frames until a second hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold exists, and determining the second hand-held steering wheel action recognition result as the finally output hand-held steering wheel action recognition result;
and if no second hand-held steering wheel action recognition result exists, determining the third hand-held steering wheel action recognition result with the largest quantity ratio as the finally output hand-held steering wheel action recognition result.
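The three-stage fallback described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the quantity ratio is taken over the frames recognized so far, and the threshold value 0.6 and the function name are hypothetical.

```python
from collections import Counter

def fuse_results(frame_results, ratio_threshold=0.6):
    """Fuse per-frame hand-held steering wheel recognition results for one
    recognition unit (a preset number of frames) into a single final result.
    `frame_results` is the list of per-frame labels; `ratio_threshold` is
    the preset ratio threshold (0.6 is an illustrative value)."""
    half = frame_results[:len(frame_results) // 2]

    # First stage: if one result already dominates the first half of the
    # preset frames, decide early without recognizing the remaining frames.
    label, count = Counter(half).most_common(1)[0]
    if count / len(half) > ratio_threshold:
        return label

    # Second stage: continue with the remaining frames; decide once a
    # result's share over all recognized frames exceeds the threshold.
    label, count = Counter(frame_results).most_common(1)[0]
    if count / len(frame_results) > ratio_threshold:
        return label

    # Third stage: no result exceeds the threshold, so fall back to the
    # result with the largest quantity ratio.
    return label
```

With this sketch, a unit whose first half is unanimous is decided early, while a unit with no dominant result falls through to the plurality label.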
With reference to the first aspect of the present application, in a fourth possible implementation manner of the first aspect of the present application, the hand-held steering wheel action recognition result includes a both-hands-off-steering-wheel action recognition result, a left-hand-only steering wheel action recognition result, a right-hand-only steering wheel action recognition result, a both-hands-normal-hold steering wheel action recognition result, and a both-hands-crossed-hold steering wheel action recognition result.
With reference to the first aspect of the present application, in a fifth possible implementation manner of the first aspect of the present application, before the inputting of the initial image into the hand-held steering wheel action recognition model so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, the method further comprises:
preprocessing the initial image, wherein the preprocessing comprises resolution conversion processing for scaling the resolution of the initial image to a preset resolution, and normalization processing for converting the initial image into a preset standard form.
With reference to the first aspect of the present application, in a sixth possible implementation manner of the first aspect of the present application, before the inputting of the initial image into the hand-held steering wheel action recognition model so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, the method further comprises:
completing the deployment of the hand-held steering wheel action recognition model by using the C/C++ programming language.
In a second aspect, the present application provides a device for processing a monitoring image of a vehicle, the device comprising:
the device comprises an acquiring unit, an identifying unit and an extracting unit, wherein the acquiring unit is configured to acquire an initial image shot by a camera arranged on a vehicle, and the initial image is obtained by shooting the steering wheel of the vehicle;
the identifying unit is configured to input the initial image into a hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, wherein the model is obtained by training an initial model in advance with different images as a training set, the different images being annotated with corresponding hand-held steering wheel action recognition results;
and the extracting unit is configured to extract the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model.
With reference to the second aspect of the present application, in a first possible implementation manner of the second aspect of the present application, the initial image includes a plurality of sub-images in the form of an image set or in the form of a video, and the identifying unit is specifically configured to:
sequentially input the plurality of sub-images into the hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the plurality of sub-images in sequence;
the apparatus further comprises a determining unit configured to:
fuse the hand-held steering wheel action recognition results respectively corresponding to the plurality of sub-images output by the hand-held steering wheel action recognition model, and determine the finally output hand-held steering wheel action recognition result.
With reference to the first possible implementation manner of the second aspect of the present application, in a second possible implementation manner of the second aspect of the present application, the determining unit is specifically configured to:
if a target hand-held steering wheel action recognition result whose quantity ratio is greater than a preset ratio threshold exists among the hand-held steering wheel action recognition results respectively corresponding to the different sub-images, determine the target hand-held steering wheel action recognition result as the finally output hand-held steering wheel action recognition result.
With reference to the second possible implementation manner of the second aspect of the present application, in a third possible implementation manner of the second aspect of the present application, with images of a preset number of frames as the recognition unit, the determining unit is specifically configured to:
if a first hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold exists among the hand-held steering wheel action recognition results respectively corresponding to the sub-images of the first half of the preset number of frames, determine the first hand-held steering wheel action recognition result as the finally output hand-held steering wheel action recognition result;
if no first hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold exists among the hand-held steering wheel action recognition results respectively corresponding to the sub-images of the first half of the preset number of frames, continue to recognize the sub-images of the remaining frames until a second hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold exists, and determine the second hand-held steering wheel action recognition result as the finally output hand-held steering wheel action recognition result;
and if no second hand-held steering wheel action recognition result exists, determine the third hand-held steering wheel action recognition result with the largest quantity ratio as the finally output hand-held steering wheel action recognition result.
With reference to the second aspect of the present application, in a fourth possible implementation manner of the second aspect of the present application, the hand-held steering wheel action recognition result includes a both-hands-off-steering-wheel action recognition result, a left-hand-only steering wheel action recognition result, a right-hand-only steering wheel action recognition result, a both-hands-normal-hold steering wheel action recognition result, and a both-hands-crossed-hold steering wheel action recognition result.
With reference to the second aspect of the present application, in a fifth possible implementation manner of the second aspect of the present application, the apparatus further includes a preprocessing unit, configured to:
and preprocessing the initial image, wherein the preprocessing comprises resolution conversion processing for scaling the resolution of the initial image to a preset resolution and normalization processing for converting the initial image into a preset standard form.
With reference to the second aspect of the present application, in a sixth possible implementation manner of the second aspect of the present application, the apparatus further includes a pre-deployment unit, configured to:
complete the deployment of the hand-held steering wheel action recognition model by using the C/C++ programming language.
In a third aspect, the present application provides a device for processing a monitoring image of a vehicle, including a processor and a memory, where the memory stores a computer program, and the processor executes the method provided in the first aspect or any one of the possible implementation manners of the first aspect when calling the computer program in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method provided in the first aspect of the present application or any one of the possible implementations of the first aspect of the present application.
From the above, the present application has the following advantageous effects:
for a monitoring image acquired by a vehicle, the present application provides a processing method. After an initial image shot by a camera arranged on the vehicle is acquired, the initial image is input into a hand-held steering wheel action recognition model, so that the model performs hand-held steering wheel action recognition on the initial image, and the hand-held steering wheel action recognition result output by the model is extracted. In this process, through the application of a neural network, the driver's hand-held steering wheel situation at the time the image was shot is accurately determined at the data-processing level. Thus, while the monitoring image is acquired, the driver's hand-held steering wheel situation at the time of shooting can also be provided, offering more accurate and effective data support. This avoids the problem of cumbersome file searching when the monitoring image is later retrieved to check the driver's hand-held steering wheel situation at the time, and improves the convenience of applying the monitoring image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for processing a monitoring image of a vehicle according to the present application;
FIG. 2 is a schematic flow chart of another method for processing a monitoring image of a vehicle according to the present application;
FIG. 3 is a schematic flow chart illustrating a process of determining a final output hand-held steering wheel motion recognition result according to the present application;
FIG. 4 is a schematic diagram of a monitoring image processing device of a vehicle according to the present application;
fig. 5 is a schematic structural diagram of a monitoring image processing device of a vehicle according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus. The naming or numbering of the steps appearing in the present application does not mean that the steps in the method flow have to be executed in the chronological/logical order indicated by the naming or numbering, and the named or numbered process steps may be executed in a modified order depending on the technical purpose to be achieved, as long as the same or similar technical effects are achieved.
The division of the modules presented in this application is a logical division, and in practical applications, there may be another division, for example, multiple modules may be combined or integrated into another system, or some features may be omitted, or not executed, and in addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, and the indirect coupling or communication connection between the modules may be in an electrical or other similar form, which is not limited in this application. The modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed in a plurality of circuit modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of the present disclosure.
Before describing the method for processing the monitoring image of the vehicle provided by the present application, the background related to the present application will be described first.
The method and the device for processing the monitoring image of the vehicle and the computer readable storage medium can be applied to a processing device of the monitoring image of the vehicle, and are used for processing data of the monitoring image, so that the convenience of the application of the monitoring image is improved.
In the method for processing the monitoring image of the vehicle, an execution main body may be a processing device of the monitoring image of the vehicle, or a processing device such as a server, a physical host, or a User Equipment (UE) integrated with the processing device of the monitoring image of the vehicle. The processing device of the monitoring image of the vehicle may be implemented in a hardware or software manner, the UE may specifically be a terminal device such as a smart phone, a tablet computer, a notebook computer, a desktop computer, or a Personal Digital Assistant (PDA), and the processing device of the monitoring image of the vehicle may be set in a device cluster manner.
As an example, in a personal use scene of a vehicle owner, the processing device of the monitoring image of the vehicle may specifically be the UE at hand of the user, typically a smart phone. In the daily use of the vehicle, when the user suspects or determines that the vehicle has had a possible accident such as a collision, or another image playback requirement is triggered, the UE may read the related monitoring image through a connection with the vehicle system or the camera, and view the driver's hand-held steering wheel situation by using the processing method of the monitoring image of the vehicle provided by the present application.
As another example, in addition to the use scene of the vehicle owner, there may also be a driver-training or driving-test scene. Both involve requirements on the driver's driving standards, and in this case the situation of the trainee or examinee holding the steering wheel at the time can be restored by the processing method of the monitoring image of the vehicle provided by the present application, through a desktop, notebook or similar device deployed in the scene, so as to provide direct and effective data support.
Of course, the above two scenarios are only examples, and in practical application, the method for processing the monitoring image of the vehicle provided by the present application may be applied through different devices according to specific application requirements, and is not specifically limited herein.
The monitoring image involved in the present application may be a monitoring video, that is, an original video shot by the camera on the vehicle, whose data takes the form of a video file combining multiple images; alternatively, the monitoring image may be images obtained by processing the monitoring video, such as images obtained by parsing and disassembling the monitoring video, or images captured from the monitoring video. That is to say, the monitoring image in the present application may specifically include a plurality of sub-images in the form of an image set or in the form of a video, and the processing method of the monitoring image of the vehicle in the present application performs specific image processing with these sub-images as objects.
Next, a method for processing a monitoring image of a vehicle according to the present application will be described.
First, referring to fig. 1, fig. 1 shows a schematic flow chart of a method for processing a monitoring image of a vehicle according to the present application, and the method for processing a monitoring image of a vehicle according to the present application may specifically include the following steps:
step S101, acquiring an initial image shot by a camera arranged on a vehicle, wherein the initial image is shot by a steering wheel of the vehicle;
It is understood that, in the present application, the acquisition of the initial image may be a real-time shooting action, such as shooting a video or capturing images at preset time intervals.
Alternatively, it may also be understood as a reading and extracting action of a stored image, such as reading a previously stored or captured image from a local storage space, or reading a previously stored or captured image from another device.
It can be seen that both the camera and a device storing the initial image can fall within the scope of the device for processing the monitoring image of the vehicle in the present application, and the device form can be adjusted specifically according to actual needs.
In addition, the image processing in the present application aims to recognize the driver's specific hand-held steering wheel action, so the initial image is obtained specifically by shooting the steering wheel of the vehicle.
Of course, the image content in the initial image may include other content, such as the content of other spatial positions, such as the driver's seat, the vehicle door, and the passenger seat, in addition to the steering wheel and the driver's hand on the steering wheel.
Step S102, inputting the initial image into a hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, wherein the model is obtained by training an initial model in advance with different images as a training set, the different images being annotated with corresponding hand-held steering wheel action recognition results;
In the training process, a worker generally configures a training set consisting of different images annotated with corresponding hand-held steering wheel action recognition results. The different images are input into the model in sequence, and the model performs image recognition processing on each input image to recognize the hand-held steering wheel action it contains, realizing the forward propagation of the model. A loss function is then computed from the hand-held steering wheel action recognition result output by the model, and the model parameters are optimized according to the loss, realizing the back propagation of the model. Through multiple rounds of training, when preset training requirements such as training duration, number of training iterations, or recognition accuracy are met, the training of the model is complete.
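The forward/backward round described above can be sketched as follows. This assumes a PyTorch implementation with a stand-in linear classifier over 224 x 224 three-channel inputs and five action classes; none of these specifics (framework, backbone, optimizer, learning rate) are fixed by the patent.

```python
import torch
from torch import nn

# Stand-in classifier; the patent leaves the actual backbone open
# (ResNet, YOLOv5s, ShuffleNet, or a custom network).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 5))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def train_step(images, labels):
    """One training round: forward propagation, loss computation against
    the annotated labels, and back propagation to optimize parameters."""
    optimizer.zero_grad()
    logits = model(images)          # forward propagation
    loss = loss_fn(logits, labels)  # compare with annotated results
    loss.backward()                 # back propagation
    optimizer.step()                # parameter optimization
    return loss.item()
```

Repeating `train_step` over batches of the training set for multiple rounds, while monitoring accuracy on a validation set, matches the training loop outlined above.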
For example, a large amount of hand-held steering wheel data (covering different lighting, different weather, different scenes, drivers of different sexes and ages, different clothing, different accessories on the hands, and the like) can be prepared in the early stage and divided into a training set, a validation set and a test set at a ratio of 8:1:1.
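The 8:1:1 division can be done with a simple shuffled split; a minimal sketch (the function name and fixed shuffling seed are illustrative choices, not from the patent):

```python
import random

def split_8_1_1(samples, seed=0):
    """Shuffle the annotated samples and divide them into training,
    validation and test sets at the 8:1:1 ratio mentioned above."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = int(len(samples) * 0.8)
    n_val = int(len(samples) * 0.1)
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test
```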
Since the training set and the training process are organized around the hand-held steering wheel actions in the images, the trained model has a targeted recognition effect for the hand-held steering wheel actions contained in an input image and can be applied to hand-held steering wheel action recognition; the model is therefore called a hand-held steering wheel action recognition model and can recognize the specific hand-held steering wheel action contained in an input image.
The hand-held steering wheel action recognition model may be a neural network model of a different type, such as ResNet, YOLOv5s or ShuffleNet; it may be an existing model, an improvement of an existing model, or even an independently built model. In practical applications, the model that achieves the best effect in the hand-held steering wheel action recognition scene is selected, considering accuracy, network complexity, computation speed, device power consumption, memory requirements, and the like.
As an example, the hand-held steering wheel actions that the model can recognize may specifically include a both-hands-off-steering-wheel action, a left-hand-only steering wheel action, a right-hand-only steering wheel action, a both-hands-normal-hold steering wheel action, and a both-hands-crossed-hold steering wheel action.
Correspondingly, the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model may specifically include a both-hands-off-steering-wheel action recognition result, a left-hand-only steering wheel action recognition result, a right-hand-only steering wheel action recognition result, a both-hands-normal-hold steering wheel action recognition result, and a both-hands-crossed-hold steering wheel action recognition result.
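For downstream use, the five result classes can be represented as an enumeration; the identifier names and class-to-index mapping below are illustrative, not taken from the patent:

```python
from enum import Enum

class HandheldWheelAction(Enum):
    """The five hand-held steering wheel action recognition results."""
    BOTH_HANDS_OFF = 0
    LEFT_HAND_ONLY = 1
    RIGHT_HAND_ONLY = 2
    BOTH_HANDS_NORMAL = 3
    BOTH_HANDS_CROSSED = 4
```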
In addition, before the image is input into the hand-held steering wheel action recognition model, the initial image can be preprocessed, the preprocessing comprises resolution conversion processing for scaling the resolution of the initial image to the preset resolution and normalization processing for converting the initial image into the preset standard form, and the application of the preprocessing can be understood to be used for normalizing the input image of the model, ensure that the model can accurately and stably recognize the input image, and further accurately recognize the hand-held steering wheel action of the input image.
For example, an image captured by a camera at 1080p resolution can be scaled to 224 × 224, which unifies and reduces the data processing scale, and then normalized; the image remains three-channel and does not need to be converted to grayscale.
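A minimal sketch of this preprocessing, assuming nearest-neighbour resampling and a simple scale-to-[0, 1] normalization; a real deployment would typically use a library resize (such as cv2.resize) and the normalization constants the model was trained with:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Scale an H x W x 3 image to size x size and normalize pixel
    values to [0, 1]; the image stays three-channel throughout, so
    no conversion to grayscale is needed."""
    h, w, c = image.shape
    assert c == 3, "expects a three-channel image"
    # nearest-neighbour resize via index sampling (a stand-in for a
    # proper library resize in a real deployment)
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    # normalization to a preset standard form: here, scale to [0, 1]
    return resized.astype(np.float32) / 255.0

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # 1080p frame
out = preprocess(frame)
print(out.shape)  # (224, 224, 3)
```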
And step S103, extracting the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model.
After the hand-held steering wheel action recognition model completes the recognition processing of the hand-held steering wheel action in the input initial image, the output hand-held steering wheel action recognition result can be extracted.
At this time, the recognition result can be output in the form of remote storage. For example, the recognition processing of the hand-held steering wheel action can be completed locally, and the recognition result then uploaded to a server for storage, so that the data on the driver's hand-held steering wheel action in the scene corresponding to the image is stored in the cloud.
Of course, the recognition result may be output in a locally stored form.
Or, the local device can be used as an output carrier, and the identification result is output to the relevant user of the device for the user to view.
It can be seen from the above embodiments that the present application provides a processing method for a monitoring image acquired by a vehicle. After an initial image captured by a camera disposed on the vehicle is acquired, the initial image is input into a hand-held steering wheel action recognition model so that the model performs hand-held steering wheel action recognition on it, and the hand-held steering wheel action recognition result output by the model is extracted. In this process, through the application of a neural network, the condition of the driver's hands on the steering wheel at the time the image was captured is accurately determined at the data processing level. As a result, while the monitoring image is acquired, the corresponding hand-held steering wheel condition can be provided as more accurate and effective data support, and the cumbersome file search otherwise needed when the monitoring image is retrieved to check the driver's hand-held steering wheel condition at that time is avoided, improving the convenience of the monitoring image in application.
In addition, as mentioned above, in practical applications, the initial image obtained in step S101 may specifically take the form of an image set or a video. In that case, step S102 may be generally understood as inputting N images into the hand-held steering wheel action recognition model in sequence, and then sequentially extracting the multiple hand-held steering wheel action recognition results output by the model.
In this case, the present application also considers that, in the recognition processing of the hand-held steering wheel action in the initial image, the final recognition result may be determined from an overall perspective, so as to further improve the processing effect and the use effect of the monitoring image processing method of the vehicle of the present application on the initial image.
Referring to fig. 2, another schematic flow chart of the monitoring image processing method for a vehicle according to the present application may include the following steps S201 to S204:
step S201, acquiring an initial image shot by a camera arranged on a vehicle, wherein the initial image comprises a plurality of sub-images in an image set form or a video form;
for the acquisition of the initial image in step 201, reference may be made to the content of step S101 and the like in the embodiment corresponding to fig. 1, which is not described herein again in detail.
Step S202, sequentially inputting a plurality of sub-images into a hand-held steering wheel action recognition model, and enabling the hand-held steering wheel action recognition model to sequentially recognize the hand-held steering wheel actions of the sub-images;
it can be found that, in the process of applying the model, the recognition processing of the overall hand-held steering wheel action of the initial image comprises inputting the plurality of sub-images included in the initial image into the model to be recognized respectively.
Step S203, extracting hand-held steering wheel action recognition results corresponding to the plurality of sub-images output by the hand-held steering wheel action recognition model respectively;
at this time, from the overall viewpoint, the hand-held steering wheel action recognition results obtained by the model for the sequentially input sub-images can be extracted.
And S204, fusing the hand-held steering wheel action recognition results corresponding to the plurality of sub-images output by the hand-held steering wheel action recognition model respectively, and determining the final output hand-held steering wheel action recognition result.
After the hand-held steering wheel action recognition results corresponding to the sub-images output by the model are extracted, these recognition results can be fused at the overall level to form a new hand-held steering wheel action recognition result, which serves as the final output.
It can be understood that, in practical application, a single sub-image may yield a hand-held steering wheel action recognition result that differs from those of the other sub-images, whether for an objective technical reason or for a reason specific to the image of the actual scene. For these abnormal situations, by determining the final hand-held steering wheel action recognition result at the overall level, the present application can effectively overcome the recognition error caused by an abnormal situation and avoid the fluctuation caused by an abnormal recognition result.
In the process of fusing the recognition results, modes such as deleting and ignoring can generally be adopted, so that recognition results with a small proportion (generally caused by abnormal recognition) are discarded and the dominant recognition result is kept as the final output.
For example, in combination with the actual operation experience of the hand-held steering wheel action recognition model, a preset ratio threshold may be configured. In step S204, if there is a target hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold among the hand-held steering wheel action recognition results corresponding to the different sub-images, the target hand-held steering wheel action recognition result is determined as the final output hand-held steering wheel action recognition result.
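This fusion rule can be sketched as a simple threshold vote; the function name, labels and default threshold below are illustrative assumptions, not identifiers from the present application:

```python
from collections import Counter

def fuse_results(frame_results, ratio_threshold=0.6):
    """Keep the recognition result whose count exceeds ratio_threshold
    of all per-frame results; return None when no result is that
    dominant (minority/abnormal results are effectively ignored)."""
    counts = Counter(frame_results)
    label, count = counts.most_common(1)[0]
    if count > ratio_threshold * len(frame_results):
        return label
    return None

# 25 of 30 frames agree (83% > 60%), so that result is kept
print(fuse_results(["both_hands_normal"] * 25 + ["hands_off"] * 5))
```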
It can be understood that the preset ratio threshold may be manually configured by a worker according to actual operation experience, or may be adjusted by the device itself under a dynamic adjustment mechanism when the device, together with the worker, identifies a recognition abnormality, thereby achieving better flexibility and higher recognition accuracy.
On the basis of the preset ratio threshold, as an example, the method may specifically be configured to process the plurality of sub-images in recognition units of a preset number of frames and determine the final recognition result as follows. If a first hand-held steering wheel action recognition result whose quantity ratio is greater than the preset ratio threshold exists among the hand-held steering wheel action recognition results corresponding to the first half of the preset number of frames of sub-images, the first hand-held steering wheel action recognition result is determined as the final output hand-held steering wheel action recognition result;
if the first hand-held steering wheel action recognition results with the quantity ratio larger than the preset ratio threshold value do not exist in the hand-held steering wheel action recognition results respectively corresponding to the sub-images with the first half of the preset frames, continuing to recognize the sub-images of the rest frames until second hand-held steering wheel action recognition results with the quantity ratio larger than the preset ratio threshold value exist, and determining that the second hand-held steering wheel action recognition results are finally output hand-held steering wheel action recognition results;
and if the second hand-held steering wheel action recognition result does not exist, determining that the third hand-held steering wheel action recognition result with the largest number ratio is the final output hand-held steering wheel action recognition result.
It can be understood, taking a USB camera as an example, that if the video captured by the USB camera is 30 frames per second (FPS), that is, 30 images are acquired per second, the preset number of frames can be set to 60 and the preset ratio threshold to 60%. Considering that abnormal recognition may occur in actual operation, for example when the recognition results of adjacent frames differ although the same recognition result should actually be obtained, the present application can perform smoothing based on the consideration that actions are consistent during actual driving. Under this limit on the number of sub-image frames, the recognition results of 30 to 60 consecutive images are processed at a time, and the recognition result that occurs most often in that period (1s to 2s) is taken; a result whose count exceeds 60% of the total can be output in less than 2s, that is, one result is reported every 1s to 2s. In practical operation, the accuracy of the finally output recognition result can reach 99%, so the method can be conveniently used in practical application.
If the recognition processing of the hand-held steering wheel action recognition model is understood as classification of hand-held steering wheel actions, the recognizable preset hand-held steering wheel actions can be recorded as different classification results, and the above-mentioned hands-off steering wheel action recognition result, single-left-hand-held steering wheel action recognition result, single-right-hand-held steering wheel action recognition result, normal two-hand-held steering wheel action recognition result and crossed two-hand-held steering wheel action recognition result can be recorded as the 5 classification results N1, N2, N3, N4 and N5.
In this classification scenario, the number of times the hand-held steering wheel action recognition model outputs each class for the input sub-images may be recorded as N1, N2, N3, N4 and N5, where the total number of sub-images N = N1 + N2 + N3 + N4 + N5. In the process of fusing the classification results, the maximum among N1, N2, N3, N4 and N5 is found and recorded as N_max.
Taking the initial total number of sub-images N as 30 (corresponding to an initial image with a 1s time span), the determination of the final output hand-held steering wheel action recognition result (the finally reported classification result of the hand-held steering wheel action) can be understood with reference to the flow diagram of the present application shown in fig. 3.
When N_max > N × 60%, that is, once an N_max greater than 18 is obtained, the corresponding classification result is output.
If the number of sub-images reaches 30 frames and N_max is not greater than 18, processing continues with the sub-image of the 31st frame: the counts N1, N2, N3, N4 and N5 are updated, the maximum is found again, and the corresponding recognition result is output when that maximum is greater than 31 × 60%, i.e. greater than 18 after rounding;
if N_max is still not greater than the threshold at 31 frames, processing continues with the sub-image of the 32nd frame, and so on.
This loop iterates until the sub-image of the Mth frame (30 ≤ M < 60) is reached; whenever N_max > M × 60%, the corresponding classification result is reported.
When M reaches 60, the maximum of N1, N2, N3, N4 and N5 is simply found and the corresponding class is reported. That is, a classification result is reported after at least 1 second (at least 30 frames of sub-images) and, at the latest, after 2 seconds (at most 60 frames of sub-images) a result will be reported without fail; the frame-number parameter (corresponding to the time interval) can be adjusted according to the actual situation.
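The whole 30-to-60-frame reporting loop described above can be sketched as follows, assuming per-frame recognition results arrive as a stream; the function name, parameter names and labels are illustrative assumptions:

```python
from collections import Counter

def report_over_window(frame_stream, min_frames=30, max_frames=60, ratio=0.6):
    """Consume per-frame recognition results and report one fused result:
    from min_frames onward, report as soon as one result's count exceeds
    ratio * frames_seen; at max_frames, fall back to the most frequent
    result seen so far."""
    counts = Counter()
    seen = 0
    for result in frame_stream:
        counts[result] += 1
        seen += 1
        if seen >= min_frames:
            label, n_max = counts.most_common(1)[0]
            if n_max > ratio * seen or seen >= max_frames:
                return label
    # stream ended early: report the most frequent result so far
    return counts.most_common(1)[0][0]

# 30 FPS camera: a report is produced within 1-2 seconds of frames
frames = ["both_hands_normal"] * 20 + ["hands_off"] * 10
print(report_over_window(frames))  # both_hands_normal
```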
In addition, as another example, the present application also considers the processing method of a monitoring image of a vehicle at the development level, which may involve deployment on embedded platforms and embedded devices in the actual application process. To facilitate such deployment, in practical application, the deployment of the hand-held steering wheel action recognition model may be completed using the C/C++ programming language, so that the model has better generality and compatibility in deployment processing and thus better application value.
The above is an introduction of a method for processing a monitoring image of a vehicle provided by the present application, and in order to better implement the method for processing the monitoring image of the vehicle provided by the present application, the present application further provides a device for processing the monitoring image of the vehicle from the perspective of a functional module.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a processing device for monitoring images of a vehicle according to the present application, in which the processing device 400 for monitoring images of a vehicle may specifically include the following structures:
an acquisition unit 401 configured to acquire an initial image captured by a camera disposed on a vehicle, the initial image being captured of a steering wheel of the vehicle;
the identification unit 402 is configured to input an initial image into a hand-held steering wheel motion identification model, so that the hand-held steering wheel motion identification model performs hand-held steering wheel motion identification on the initial image, the hand-held steering wheel motion identification model is obtained by training the initial model in advance through different images serving as a training set, and the different images are labeled with corresponding hand-held steering wheel motion identification results;
an extracting unit 403, configured to extract the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model.
In an exemplary implementation, the initial image includes a plurality of sub-images in the form of an image set or in the form of a video, and the identifying unit 402 is specifically configured to:
sequentially inputting the plurality of sub-images into the hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model sequentially performs hand-held steering wheel action recognition on the plurality of sub-images;
the apparatus further comprises a determining unit 404 for:
and respectively corresponding walking steering wheel action recognition results of a plurality of sub-images output by the walking steering wheel action recognition model are fused, and finally output walking steering wheel action recognition results are determined.
In another exemplary implementation manner, the determining unit 404 is specifically configured to:
and if the target hand-held steering wheel action recognition results with the quantity ratio larger than the preset ratio threshold exist in the hand-held steering wheel action recognition results respectively corresponding to the different sub-images, determining the target hand-held steering wheel action recognition result as the final output hand-held steering wheel action recognition result.
In a further exemplary implementation manner, the determining unit 404 is specifically configured to:
if first hand-held steering wheel action recognition results with the quantity ratio larger than a preset ratio threshold exist in the hand-held steering wheel action recognition results respectively corresponding to the sub-images with the first half of preset frames, determining the first hand-held steering wheel action recognition results as final output hand-held steering wheel action recognition results;
if the first hand-held steering wheel action recognition results with the quantity ratio larger than the preset ratio threshold value do not exist in the hand-held steering wheel action recognition results respectively corresponding to the sub-images with the first half of the preset frames, continuing to recognize the sub-images of the rest frames until second hand-held steering wheel action recognition results with the quantity ratio larger than the preset ratio threshold value exist, and determining that the second hand-held steering wheel action recognition results are finally output hand-held steering wheel action recognition results;
and if the second hand-held steering wheel action recognition result does not exist, determining that the third hand-held steering wheel action recognition result with the largest number ratio is the final output hand-held steering wheel action recognition result.
In yet another exemplary implementation, the holding steering wheel action recognition results include a hands-off steering wheel action recognition result, a left-handed holding steering wheel action recognition result, a right-handed holding steering wheel action recognition result, a normal-hands holding steering wheel action recognition result, and a cross-hands holding steering wheel action recognition result.
In yet another exemplary implementation, the apparatus further comprises a preprocessing unit 405 for:
and preprocessing the initial image, wherein the preprocessing comprises resolution conversion processing for scaling the resolution of the initial image to a preset resolution and normalization processing for converting the initial image into a preset standard form.
In yet another exemplary implementation, the apparatus further includes a pre-deployment unit 406 configured to:
and finishing the deployment of the hand-held steering wheel action recognition model by using a C/C + + programming language.
The present application further provides a processing device for a monitoring image of a vehicle from a hardware structure perspective, referring to fig. 5, fig. 5 shows a schematic structural diagram of the processing device for the monitoring image of the vehicle of the present application, specifically, the processing device for the monitoring image of the vehicle of the present application may include a processor 501, a memory 502, and an input/output device 503, where the processor 501 is configured to implement steps of a processing method for the monitoring image of the vehicle in the corresponding embodiment of fig. 1 when executing a computer program stored in the memory 502; alternatively, the processor 501 is configured to implement the functions of the units in the embodiment corresponding to fig. 4 when executing the computer program stored in the memory 502, and the memory 502 is configured to store the computer program required by the processor 501 to execute the method for processing the monitoring image of the vehicle in the embodiment corresponding to fig. 1.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in memory 502 and executed by processor 501 to accomplish the present application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of a computer program in a computer device.
The processing device of the monitoring image of the vehicle may include, but is not limited to, a processor 501, a memory 502, and an input-output device 503. It will be understood by those skilled in the art that the illustration is merely an example of a processing device of a monitoring image of a vehicle, and does not constitute a limitation of the processing device of the monitoring image of the vehicle, and may include more or less components than those illustrated, or combine some components, or different components, for example, the processing device of the monitoring image of the vehicle may further include a network access device, a bus, etc., and the processor 501, the memory 502, the input-output device 503, etc. are connected by the bus.
The Processor 501 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, the processor being the control center of the processing device of the monitored images of the vehicle, the various parts of the overall device being connected by various interfaces and lines.
The memory 502 may be used to store computer programs and/or modules, and the processor 501 may implement various functions of the computer device by running or executing the computer programs and/or modules stored in the memory 502, as well as invoking data stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from use of a processing device of a monitoring image of the vehicle, and the like. In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid state storage device.
The processor 501, when executing the computer program stored in the memory 502, may specifically implement the following functions:
acquiring an initial image shot by a camera arranged on a vehicle, wherein the initial image is shot by a steering wheel of the vehicle;
inputting the initial image into a hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model carries out hand-held steering wheel action recognition on the initial image, wherein the hand-held steering wheel action recognition model is obtained by taking different images as training initial models of a training set in advance, and the different images are marked with corresponding hand-held steering wheel action recognition results;
and extracting a walking steering wheel action recognition result output by the walking steering wheel action recognition model.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the above-described specific working processes of the apparatus and the device for processing a monitoring image of a vehicle and the corresponding units thereof may refer to the description of the method for processing a monitoring image of a vehicle in the embodiment corresponding to fig. 1, and are not described herein again in detail.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
For this reason, the present application provides a computer-readable storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps of the method for processing the monitoring image of the vehicle in the embodiment corresponding to fig. 1 in the present application, and specific operations may refer to the description of the method for processing the monitoring image of the vehicle in the embodiment corresponding to fig. 1, which is not repeated herein.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps of the method for processing the monitoring image of the vehicle in the embodiment corresponding to fig. 1, the beneficial effects that can be achieved by the method for processing the monitoring image of the vehicle in the embodiment corresponding to fig. 1 can be achieved, and the detailed description is omitted here.
The method, the apparatus, the device and the computer-readable storage medium for processing the monitoring image of the vehicle provided by the present application are described in detail above, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiment is only used to help understanding the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method for processing a monitoring image of a vehicle, the method comprising:
acquiring an initial image shot by a camera arranged on a vehicle, wherein the initial image is shot by a steering wheel of the vehicle;
inputting the initial image into a hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, wherein the hand-held steering wheel action recognition model is obtained by training the initial model by taking different images as a training set in advance, and the different images are marked with corresponding hand-held steering wheel action recognition results;
and extracting a hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model.
2. The method of claim 1, wherein the initial image comprises a plurality of sub-images in the form of an image set or a video, and the inputting the initial image into a hand-held steering wheel motion recognition model enables the hand-held steering wheel motion recognition model to perform hand-held steering wheel motion recognition on the initial image comprises:
sequentially inputting the sub-images into the hand-held steering wheel action recognition model, so that the hand-held steering wheel action recognition model sequentially performs hand-held steering wheel action recognition on the sub-images;
after the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model is extracted, the method further comprises the following steps:
fusing the hand-held steering wheel action recognition results corresponding to the plurality of sub-images output by the hand-held steering wheel action recognition model respectively, and determining a final output hand-held steering wheel action recognition result.
3. The method according to claim 2, wherein the step of fusing the hand steering wheel motion recognition results corresponding to the different sub-images output by the hand steering wheel motion recognition model to determine a final output hand steering wheel motion recognition result comprises:
and if target hand-held steering wheel action recognition results with the quantity ratio larger than a preset ratio threshold exist in the hand-held steering wheel action recognition results respectively corresponding to the different sub-images, determining the target hand-held steering wheel action recognition result as a final output hand-held steering wheel action recognition result.
4. The method according to claim 3, wherein the plurality of sub-images take images with preset frame numbers as recognition units, and if there is a target hand-held steering wheel motion recognition result with a number ratio larger than a preset ratio threshold in the hand-held steering wheel motion recognition results respectively corresponding to the different sub-images, determining the target hand-held steering wheel motion recognition result as a final output hand-held steering wheel motion recognition result, includes:
if first hand-held steering wheel action recognition results with the quantity ratio larger than a preset ratio threshold exist in hand-held steering wheel action recognition results respectively corresponding to the first half of sub-images with the preset frame number, determining that the first hand-held steering wheel action recognition results are the final output hand-held steering wheel action recognition results;
if the first hand-held steering wheel action recognition results with the quantity ratio larger than the preset ratio threshold value do not exist in the hand-held steering wheel action recognition results respectively corresponding to the first half of the sub-images with the preset frame number, continuing to recognize the sub-images of the rest frames until second hand-held steering wheel action recognition results with the quantity ratio larger than the preset ratio threshold value exist, and determining that the second hand-held steering wheel action recognition results are the final output hand-held steering wheel action recognition results;
and if the second hand-held steering wheel action recognition result does not exist, determining that the third hand-held steering wheel action recognition result with the largest number ratio is the final output hand-held steering wheel action recognition result.
5. The method of claim 1, wherein the hand-held steering wheel action recognition results comprise a hands-off steering wheel action recognition result, a left-handed hand-held steering wheel action recognition result, a right-handed hand-held steering wheel action recognition result, a two-handed normal steering wheel action recognition result, and a two-handed crossed steering wheel action recognition result.
6. The method of claim 1, wherein before inputting the initial image into a hand-held steering wheel motion recognition model such that the hand-held steering wheel motion recognition model performs hand-held steering wheel motion recognition on the initial image, the method further comprises:
and preprocessing the initial image, wherein the preprocessing comprises a resolution conversion processing of scaling the resolution of the initial image to a preset resolution and a normalization processing of converting the initial image to a preset standard form.
7. The method of claim 1, wherein before inputting the initial image into a hand-held steering wheel motion recognition model such that the hand-held steering wheel motion recognition model performs hand-held steering wheel motion recognition on the initial image, the method further comprises:
completing the deployment of the hand-held steering wheel action recognition model using the C/C++ programming language.
8. A device for processing a monitor image of a vehicle, the device comprising:
an acquisition unit, configured to acquire an initial image captured by a camera arranged on the vehicle, wherein the initial image is obtained by photographing a steering wheel of the vehicle;
an identification unit, configured to input the initial image into a hand-held steering wheel action recognition model so that the hand-held steering wheel action recognition model performs hand-held steering wheel action recognition on the initial image, wherein the hand-held steering wheel action recognition model is obtained by training an initial model in advance with different images as a training set, the different images being labeled with corresponding hand-held steering wheel action recognition results;
and an extraction unit, configured to extract the hand-held steering wheel action recognition result output by the hand-held steering wheel action recognition model.
9. A device for processing a monitoring image of a vehicle, comprising a processor and a memory in which a computer program is stored, wherein the processor, when calling the computer program in the memory, executes the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method of any one of claims 1 to 7.
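The frame-wise result selection described in claim 4 can be sketched as follows. This is an illustrative interpretation only, not the patented implementation: the function name `selectResult`, the string labels, and the choice of the total frame count as the denominator of the "quantity ratio" are all assumptions made for the sketch.

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of the result-selection rule: scan the per-frame recognition
// results; once the first half of the preset number of frames has been
// seen, return the first label whose share of the frames exceeds the
// preset ratio threshold. If no label ever exceeds the threshold after
// all frames, fall back to the label with the largest count.
std::string selectResult(const std::vector<std::string>& frameResults,
                         double ratioThreshold) {
    std::map<std::string, int> counts;
    const std::size_t half = frameResults.size() / 2;
    for (std::size_t i = 0; i < frameResults.size(); ++i) {
        ++counts[frameResults[i]];
        // Check the threshold only once the first half has been seen,
        // then keep checking as the remaining frames arrive.
        if (i + 1 >= half) {
            for (const auto& [label, n] : counts) {
                if (static_cast<double>(n) / frameResults.size() > ratioThreshold)
                    return label;  // the "first"/"second" recognition result
            }
        }
    }
    // No label exceeded the threshold: return the plurality label
    // (the "third" recognition result with the largest quantity ratio).
    std::string best;
    int bestCount = -1;
    for (const auto& [label, n] : counts) {
        if (n > bestCount) { bestCount = n; best = label; }
    }
    return best;
}
```

With six frames and a threshold of 0.4, five "left" frames out of six cause "left" to be returned as soon as the first half has been scanned; with a threshold no label can reach, the most frequent label is returned instead.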
CN202111028758.2A 2021-09-02 2021-09-02 Method, device and equipment for processing monitoring image of vehicle Active CN113947747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111028758.2A CN113947747B (en) 2021-09-02 2021-09-02 Method, device and equipment for processing monitoring image of vehicle

Publications (2)

Publication Number Publication Date
CN113947747A true CN113947747A (en) 2022-01-18
CN113947747B CN113947747B (en) 2022-08-26

Family

ID=79327781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111028758.2A Active CN113947747B (en) 2021-09-02 2021-09-02 Method, device and equipment for processing monitoring image of vehicle

Country Status (1)

Country Link
CN (1) CN113947747B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522858A (en) * 2018-11-26 2019-03-26 Oppo广东移动通信有限公司 Plant disease detection method, device and terminal device
CN110341713A (en) * 2019-07-12 2019-10-18 东南(福建)汽车工业有限公司 A kind of driver's holding steering wheel monitoring system and method based on camera
CN110852233A (en) * 2019-11-05 2020-02-28 上海眼控科技股份有限公司 Hand-off steering wheel detection and training method, terminal, device, medium, and system
CN112937445A (en) * 2021-03-25 2021-06-11 深圳安智物联科技有限公司 360-degree vehicle safety auxiliary method and vehicle-mounted system
CN113139403A (en) * 2020-01-17 2021-07-20 顺丰科技有限公司 Violation behavior identification method and device, computer equipment and storage medium
US20210225040A1 (en) * 2020-01-16 2021-07-22 Samsung Electronics Co., Ltd. Image processing apparatus and method

Also Published As

Publication number Publication date
CN113947747B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN107680589B (en) Voice information interaction method, device and equipment
US11055516B2 (en) Behavior prediction method, behavior prediction system, and non-transitory recording medium
CN107944382B (en) Method for tracking target, device and electronic equipment
CN104683692A (en) Continuous shooting method and continuous shooting device
CN109726678B (en) License plate recognition method and related device
US11636712B2 (en) Dynamic gesture recognition method, device and computer-readable storage medium
CN106201624A (en) A kind of recommendation method of application program and terminal
CN106303234A (en) Take pictures processing method and processing device
CN112580660A (en) Image processing method, image processing device, computer equipment and readable storage medium
CN114051116A (en) Video monitoring method, device and system for driving test vehicle
CN109635706B (en) Gesture recognition method, device, storage medium and device based on neural network
CN108289201A (en) Video data handling procedure, device and electronic equipment
CN111338669A (en) Updating method and device for intelligent function in intelligent analysis box
CN110035237A (en) Image processing method, device, storage medium and electronic equipment
CN104901939B (en) Method for broadcasting multimedia file and terminal and server
CN113947747B (en) Method, device and equipment for processing monitoring image of vehicle
CN112307948A (en) Feature fusion method, device and storage medium
CN110363814A (en) A kind of method for processing video frequency, device, electronic device and storage medium
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN114445864A (en) Gesture recognition method and device and storage medium
CN112560685A (en) Facial expression recognition method and device and storage medium
CN109033959B (en) Method and device for adding special effect to face of object
CN107993217B (en) Video data real-time processing method and device and computing equipment
CN111931926A (en) Hardware acceleration system and control method for convolutional neural network CNN
CN112672033A (en) Image processing method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant