CN114913232A - Image processing method, apparatus, device, medium, and product - Google Patents

Image processing method, apparatus, device, medium, and product

Info

Publication number
CN114913232A
CN114913232A (application CN202210654662.5A; granted publication CN114913232B)
Authority
CN
China
Prior art keywords
image
image processing
processed
crane
processing model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210654662.5A
Other languages
Chinese (zh)
Other versions
CN114913232B (en)
Inventor
吴新涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Petromentor International Education Beijing Co ltd
Original Assignee
Petromentor International Education Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Petromentor International Education Beijing Co ltd filed Critical Petromentor International Education Beijing Co ltd
Priority to CN202210654662.5A priority Critical patent/CN114913232B/en
Publication of CN114913232A publication Critical patent/CN114913232A/en
Application granted granted Critical
Publication of CN114913232B publication Critical patent/CN114913232B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Control And Safety Of Cranes (AREA)

Abstract

An embodiment of the present application provides an image processing method, apparatus, device, medium, and product. The image processing method comprises the following steps: acquiring an image to be processed in real time, wherein the image to be processed comprises a crane image; extracting image features from the image to be processed based on a first network of an image processing model, and determining position information of a first bounding box based on the image features, wherein the image framed by the first bounding box comprises a crane support leg image; and performing a classification calculation on the position information of the first bounding box through a second network of the image processing model to obtain a target confidence, wherein the target confidence represents the probability that a base plate has been added at the crane support leg. According to the embodiment of the present application, whether a base plate has been added at the crane support leg can be judged in time, thereby ensuring the safety of crane operation.

Description

Image processing method, apparatus, device, medium, and product
Technical Field
The present application relates to image processing technologies, and in particular, to an image processing method, apparatus, device, medium, and product.
Background
In practice, a base plate is generally added at a crane support leg to enlarge the contact area between the leg and the ground, thereby dispersing the pressure that the vehicle's weight exerts on the ground and ensuring the safety of crane operation. At present, while a crane is working, a worker checks whether its support legs are fitted with base plates according to a manually specified operating procedure, so whether a base plate has been added at a crane support leg cannot be judged in time, and the safety of crane operation cannot be ensured.
Disclosure of Invention
The embodiments of the present application provide an image processing method, apparatus, device, medium, and product, which can judge in time whether a base plate has been added at a crane support leg, so as to ensure the safety of crane operation.
In a first aspect, an embodiment of the present application provides an image processing method, comprising: acquiring an image to be processed in real time, wherein the image to be processed comprises a crane image;
extracting image features from the image to be processed based on a first network of an image processing model, and determining position information of a first bounding box based on the image features, wherein the image framed by the first bounding box comprises a crane support leg image;
and performing a classification calculation on the position information of the first bounding box through a second network of the image processing model to obtain a target confidence, wherein the target confidence represents the probability that a base plate has been added at the crane support leg.
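The steps above can be sketched as a two-stage pipeline: a detector proposes support-leg boxes, and a classifier scores each box for the presence of a base plate. This is a minimal illustration under assumptions, not the patented implementation: `detect_support_legs`, `classify_padding`, and the stub networks are hypothetical stand-ins for the first network (YOLOv5, per the description) and the second network (MobileNetV2).

```python
# Minimal sketch of the two-stage method described above.
# All names here are hypothetical stand-ins; the patent names YOLOv5
# as the first network and MobileNetV2 as the second.

def detect_support_legs(image, first_network):
    """First network: extract features and return bounding-box
    positions (x1, y1, x2, y2) framing crane support legs."""
    return first_network(image)

def classify_padding(image, box, second_network):
    """Second network: crop the box region from the image and return
    the target confidence (probability a base plate is present)."""
    x1, y1, x2, y2 = box
    crop = [row[x1:x2] for row in image[y1:y2]]
    return second_network(crop)

def process_frame(image, first_network, second_network):
    """Run both stages on one image acquired in real time."""
    results = []
    for box in detect_support_legs(image, first_network):
        confidence = classify_padding(image, box, second_network)
        results.append((box, confidence))
    return results
```

In a real deployment the two network callables would wrap trained model inference; here they are left abstract so the control flow of the claimed method stays visible.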
In an optional implementation of the first aspect, performing the classification calculation on the position information of the first bounding box through the second network of the image processing model to obtain the target confidence comprises:
cropping the image to be processed based on the position information of the first bounding box to obtain a crane support leg image;
and performing a classification calculation on the crane support leg image through the second network of the image processing model to obtain the target confidence.
In an optional implementation of the first aspect, the method further comprises:
and sending the image to be processed to an alarm platform, so that the alarm platform generates alarm information, when the target confidence is greater than a preset threshold.
In an optional implementation of the first aspect, before extracting the image features from the image to be processed based on the first network of the image processing model and determining the position information of the first bounding box based on the image features, the method further comprises:
acquiring a training sample set, wherein the training sample set comprises a plurality of image samples to be processed and a label confidence corresponding to each image sample to be processed;
and training a preset image processing model by using the image samples to be processed in the training sample set and the label confidence corresponding to each image sample to be processed, to obtain a trained image processing model.
In an optional implementation of the first aspect, training the preset image processing model by using the image samples to be processed in the training sample set to obtain the trained image processing model comprises:
extracting reference image features from an image sample to be processed based on a first network of the preset image processing model, and determining position information of a first reference bounding box based on the reference image features, wherein the image sample framed by the first reference bounding box comprises a crane support leg image sample;
performing a classification calculation on the position information of the first reference bounding box through a second network of the preset image processing model to obtain a reference confidence, wherein the reference confidence represents the probability that a base plate has been added at the crane support leg;
determining a loss function value of the preset image processing model according to the reference confidence of a target image sample to be processed and the label confidence of the target image sample to be processed, wherein the target image sample to be processed is any one of the image samples to be processed;
and training the preset image processing model by using the image samples to be processed based on the loss function value of the preset image processing model, to obtain the trained image processing model.
In an optional implementation of the first aspect, before obtaining the training sample set, the method further comprises:
acquiring a plurality of original images, wherein the original images comprise crane images;
and performing data enhancement processing on the plurality of original images according to a preset data enhancement strategy, to obtain a plurality of image samples to be processed corresponding to each original image.
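The data-enhancement step above can be sketched as follows. The specific transforms (horizontal flip, brightness scaling) are illustrative assumptions; the patent does not enumerate which augmentations its preset strategy uses, only that each original image yields several samples to be processed.

```python
# Sketch of a data-enhancement strategy: each original grayscale image
# (a list of pixel rows, values 0-255) yields several augmented samples.
# The chosen transforms are assumptions for illustration only.

def flip_horizontal(image):
    """Mirror an image left-to-right."""
    return [list(reversed(row)) for row in image]

def scale_brightness(image, factor):
    """Scale pixel intensities, clamping to the 0-255 range."""
    return [[min(255, max(0, round(p * factor))) for p in row] for row in image]

def augment(image):
    """Produce a list of to-be-processed samples from one original image."""
    return [
        image,                         # the original itself
        flip_horizontal(image),        # geometric augmentation
        scale_brightness(image, 1.3),  # simulate brighter lighting
        scale_brightness(image, 0.7),  # simulate dimmer lighting
    ]
```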
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition module, configured to acquire an image to be processed in real time, wherein the image to be processed comprises a crane image;
a determining module, configured to extract image features from the image to be processed based on a first network of an image processing model, and to determine position information of a first bounding box based on the image features, wherein the image framed by the first bounding box comprises a crane support leg image;
and a classification calculation module, configured to perform a classification calculation on the position information of the first bounding box through a second network of the image processing model to obtain a target confidence, wherein the target confidence represents the probability that a base plate has been added at the crane support leg.
In a third aspect, an electronic device is provided, including: a memory for storing computer program instructions; a processor, configured to read and execute the computer program instructions stored in the memory, so as to execute the image processing method provided in any optional implementation manner of the first aspect.
In a fourth aspect, a computer storage medium is provided, on which computer program instructions are stored, and the computer program instructions, when executed by a processor, implement the image processing method provided in any optional implementation manner of the first aspect.
In a fifth aspect, a computer program product is provided, and instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to execute an image processing method provided in any optional implementation manner of the first aspect.
In the embodiments of the present application, after an image to be processed that includes a crane image is acquired in real time, image features are extracted from it based on a first network of an image processing model, and position information of a first bounding box is determined based on the extracted image features, the image framed by the first bounding box including a crane support leg image. The position information of the first bounding box is then classified through a second network of the image processing model to obtain a target confidence, which represents the probability that a base plate has been added at the crane support leg. In this way, by acquiring and processing images in real time to obtain this probability, whether a base plate has been added at the crane support leg can be judged in time, so as to ensure the safety of crane operation.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of a training flow of an image processing model in an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a training process of an image processing model in another image processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a feature pyramid provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a pyramid of path aggregation features according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below. To make the objects, technical solutions, and advantages of the present application clearer, the application is further described with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are illustrative only and are not intended to be limiting. It will be apparent to those skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by way of example.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises it.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone.
In order to solve the prior-art problem that the safety of crane operation cannot be guaranteed because whether a base plate has been added at a crane support leg cannot be judged in time, the embodiments of the present application provide an image processing method, apparatus, device, medium, and product. By acquiring images to be processed in real time and processing them to obtain the probability that a base plate has been added at the crane support leg, whether a base plate has been added can be judged in time, so as to ensure the safety of crane operation.
In the image processing method provided by the embodiments of the present application, the execution subject may be an image processing apparatus, or a control module within the image processing apparatus for executing the method. In the embodiments of the present application, the method as executed by an image processing apparatus is taken as an example to describe the provided image processing scheme.
In addition, the image processing method provided in the embodiments of the present application processes the image to be processed using a pre-trained image processing model; therefore, the model must be trained before it is used for image processing. A specific implementation of the training method for the image processing model provided in the embodiments of the present application is described below with reference to the drawings.
An execution subject of the method is an image processing device, and the method can be specifically realized by the following steps:
firstly, acquiring a training sample set
The training sample set may include a plurality of image samples to be processed and a label confidence corresponding to each image sample. Each image sample to be processed may include a crane image sample, and the label confidence may represent the probability that a base plate has been added at the crane support legs.
In order to obtain a more accurate training sample set and further better train the image processing model, in a specific embodiment, as shown in fig. 1, the obtaining of the training sample set may specifically include the following steps:
s110, a plurality of to-be-processed image samples are obtained.
Specifically, the image processing apparatus may directly obtain a plurality of image samples to be processed within a preset time period through a monitoring device. The preset time period may be set in advance based on actual experience or needs, for example one month or three months, and is not specifically limited here.
Specifically, from a crane operation video of a work site captured by the monitoring device within the preset time period, the image processing apparatus may obtain a crane-containing image sample every N frames by grouped frame extraction, where N is a positive integer. The monitoring device may be installed on a telegraph pole or a street lamp at the construction site and may be used to capture crane image samples during site operation. In addition, the horizontal distance between the monitoring device and the crane may be kept within a preset distance, for example within 100 meters, which is not limited here.
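The every-N-frames sampling described above can be expressed purely in terms of frame indices, independent of any particular video-decoding backend. This is a sketch of the sampling rule only; decoding the video itself (e.g. with OpenCV) is outside its scope.

```python
# Sketch of grouped frame extraction: keep one frame every N frames
# of an operation-site video. Works on indices so it applies to any
# video-decoding library.

def sampled_frame_indices(total_frames, n):
    """Return the indices of frames kept when sampling every N frames.

    N must be a positive integer, as stated in the description.
    """
    if n <= 0:
        raise ValueError("N must be a positive integer")
    return list(range(0, total_frames, n))
```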
S120, label each image sample to be processed with its corresponding label confidence.
Specifically, the label confidence of each image sample to be processed may be annotated manually, or directly by the image processing apparatus; the specific labeling method is not limited here.
It should be noted that, during labeling, 75% of the labeled sample data may be used as training samples and 25% as test samples; the specific split ratio between training and test samples is not limited here.
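The 75% / 25% partition above can be sketched as a simple shuffled split. The fixed seed is an assumption added here for reproducibility; the patent leaves the exact partitioning procedure open.

```python
# Sketch of splitting labelled samples into training and test sets
# in a 75/25 ratio. The seed is an illustrative assumption.
import random

def split_samples(samples, train_ratio=0.75, seed=0):
    """Shuffle and split samples; returns (training, test) lists."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```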
It should be noted that the image processing model must be iterated multiple times to adjust its loss function value until that value satisfies the training stop condition, yielding the trained image processing model. However, if only one image sample were input at each training iteration, the sample amount would be too small to support training adjustment of the model. Therefore, the training sample set needs to contain a plurality of image samples to be processed, so that they can be used to iterate the image processing model.
In this way, by annotating the acquired image samples to be processed, the label confidence corresponding to each image sample can be obtained, and a training sample set containing the image samples to be processed can then be assembled, which facilitates subsequent model training.
And secondly, training a preset image processing model by using the image samples to be processed in the training sample set and the label confidence corresponding to each image sample to be processed to obtain the trained image processing model.
As shown in fig. 2, the step may specifically include the following steps:
s210, extracting reference image features in the image sample to be processed based on a first network of a preset image processing model, and determining position information of a first reference boundary box based on the reference image features.
The first network may be a YOLOv5 network; the specific choice of the first network is not limited here. In addition, the YOLOv5 network in the embodiments of the present application may use feature maps downsampled by a factor of 8, 16, or 32. The reference image features are the image features at the crane support legs in the image sample to be processed. The image sample framed by the first reference bounding box may comprise a crane support leg image sample. The position information of the first reference bounding box may be the position coordinates of the top-left and bottom-right pixel vertices of the bounding box in the image to be processed, which is not limited here.
Specifically, after the training sample set is obtained, the image processing apparatus may input the to-be-processed image samples in the training sample set to a first network in a preset image processing model, extract reference image features in the to-be-processed image samples based on the first network, and then determine the position information of the first reference bounding box based on the reference image features.
In one example, after acquiring the training sample set, the image processing apparatus may input the image samples to be processed into the first network of the preset image processing model; the first network may extract the reference image features of the image samples and process them to output the position information of any first reference bounding box whose reference confidence is greater than a first preset threshold. The first preset threshold may be set in advance based on actual experience or needs, for example 0.4, and is not limited here.
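The thresholding step above can be sketched as a filter over decoded detections. This assumes the first network's raw output has already been decoded into (box, confidence) pairs; the 0.4 value is the example threshold from the description.

```python
# Sketch of keeping only first-reference bounding boxes whose
# reference confidence exceeds the first preset threshold (0.4 in
# the example above).

FIRST_PRESET_THRESHOLD = 0.4

def filter_boxes(detections, threshold=FIRST_PRESET_THRESHOLD):
    """detections: list of (box, confidence) pairs; keep confident ones."""
    return [(box, conf) for box, conf in detections if conf > threshold]
```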
On this basis, it should be noted that the feature pyramid is a structure commonly used in the field of target detection; its typical form is shown in fig. 3. In this structure, low-level features carry less semantic information but locate targets accurately, while high-level features carry rich semantic information but locate targets only coarsely. The feature pyramid fuses features of different scales, so that targets of different sizes can be handled independently on different feature layers.
On this basis, the present application uses a path-aggregation feature pyramid structure, shown in fig. 4, to exploit multiple scales of the image features and thereby improve detection performance, especially for small targets. First, bottom-up path augmentation is introduced, shortening the information propagation path while exploiting the accurate localization information of low-level features; second, adaptive feature pooling is used, so that each proposed region draws on features from all pyramid levels rather than being assigned to a level arbitrarily; third, fully-connected-layer fusion is used, improving the model's ability to capture information at different scales.
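A highly simplified sketch of the top-down fusion at the heart of a feature pyramid follows: an upsampled coarser-level feature is added to the next finer level. Real pyramids operate on 2-D feature maps with learned lateral convolutions (and the path-aggregation variant adds a bottom-up pass); 1-D lists are used here purely to make the fusion pattern visible.

```python
# Toy sketch of top-down feature-pyramid fusion on 1-D "feature maps".
# Each level is half the length of the one below it (finer = longer).

def upsample2x(features):
    """Nearest-neighbour 2x upsampling of a 1-D feature list."""
    return [v for v in features for _ in range(2)]

def top_down_fuse(levels):
    """levels: feature lists ordered fine to coarse; fuse coarse into fine."""
    fused = [levels[-1]]  # coarsest level passes through unchanged
    for finer in reversed(levels[:-1]):
        up = upsample2x(fused[0])                       # bring coarse up
        fused.insert(0, [a + b for a, b in zip(finer, up)])  # element-wise add
    return fused
```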
And S220, performing classification calculation on the position information of the first reference bounding box through a second network of the preset image processing model to obtain a reference confidence coefficient.
The second network may be MobileNetV2, or another network usable for classification calculation; it is not specifically limited. The reference confidence represents the probability that a base plate has been added at the crane support leg.
Specifically, since the image sample framed by the first reference bounding box includes a crane support leg image sample, after obtaining the position information of the first reference bounding box the image processing apparatus may perform a classification calculation on it through the second network of the preset image processing model, so as to obtain the reference confidence.
It should be noted that, owing to strong background interference and high inter-class similarity, the first network has a certain shortcoming in classifying the target, which correspondingly produces a certain number of false positives and missed detections. In view of this lack of classification capability in the first network, the present application proposes to use an additional classifier, namely MobileNetV2, to improve classification performance.
Specifically, in the training stage, a target region is cropped from the image sample to be processed according to its ground-truth label (the ground truth being the label of a training image, i.e. the ideal output of the model), transformed to a size of 224 × 224 by zero filling, and fed into MobileNetV2 for training. In the inference stage, the first reference bounding box region is cropped from the image sample to be processed according to the position information of the first reference bounding box and transformed to 224 × 224 by zero filling, so as to obtain the target confidence. The position information involved may be the pixel position of the first reference bounding box in the image sample to be processed.
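The zero-filling transform above can be sketched as placing the cropped region into a zero-valued 224 × 224 canvas. This sketch assumes the crop is no larger than the target; scaling of larger crops, which a full implementation would also need, is omitted for brevity.

```python
# Sketch of the zero-filling transform: pad a cropped support-leg
# region (grayscale, list of pixel rows) with zeros to 224 x 224
# before feeding it to the classifier.

TARGET = 224

def zero_pad(crop, target=TARGET):
    """Place crop in the top-left of a zero canvas of size target x target."""
    h, w = len(crop), len(crop[0])
    canvas = [[0] * target for _ in range(target)]
    for y in range(h):
        for x in range(w):
            canvas[y][x] = crop[y][x]
    return canvas
```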
And S230, determining a loss function value of a preset image processing model according to the reference confidence coefficient of the target image sample to be processed and the label confidence coefficient of the target image sample to be processed.
Wherein the target image sample to be processed is any one of the image samples to be processed.
Specifically, the image processing apparatus may obtain a reference confidence for any one of the plurality of image samples to be processed, and then accurately determine the loss function value of the preset image processing model from the label confidence corresponding to that image sample, so that the model can be iteratively trained based on the loss function value to obtain a more accurate image processing model.
S240, training the preset image processing model by using the image sample to be processed based on the loss function value of the preset image processing model to obtain the trained image processing model.
Specifically, in order to obtain a well-trained image processing model, when the loss function value does not satisfy the training stop condition, the model parameters of the preset image processing model are adjusted, and the parameter-adjusted model continues to be trained with the image samples to be processed until the loss function value satisfies the training stop condition, thereby obtaining the trained image processing model.
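The iterate-until-the-stop-condition loop described above can be sketched generically. The loss, the update rule, and the stop criterion values here are toy stand-ins; the patent does not specify the optimizer or the exact stop condition.

```python
# Sketch of iterative training: adjust parameters while the loss
# function value does not satisfy the stop condition. compute_loss
# and update are toy stand-ins for real training machinery.

def train(params, compute_loss, update, loss_target=0.01, max_iters=1000):
    """Iterate until the loss satisfies the stop condition (or max_iters)."""
    for _ in range(max_iters):
        loss = compute_loss(params)
        if loss <= loss_target:   # training stop condition met
            break
        params = update(params, loss)  # adjust model parameters
    return params
```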
In this embodiment, the image processing apparatus inputs each image sample to be processed in the training sample set into the first network of the preset image processing model, extracts the reference image features, and determines the position information of the first reference bounding box from them, the framed image samples including crane support leg image samples. Once the position information of the first reference bounding box is determined, the second network of the preset image processing model can derive from it a reference confidence representing the probability that a base plate has been added at the crane support leg. A loss function value can then be determined from the label confidence corresponding to each image sample, and the preset image processing model can be trained with the image samples based on that loss function value until it satisfies the training stop condition, yielding a more accurate image processing model.
Because the crane leg occupies only a very small area relative to the whole crane in the image to be processed, the proportion of crane leg pixels in the first bounding box is relatively small, which leads to a low effective resolution; other interfering objects may also occlude the legs. In addition, the two target classes, "leg with a base plate" and "leg without a base plate", are highly similar, and the difference mainly appears in fine-grained detail features, so the output confidence is not accurate enough.
Based on this, in an embodiment, the above-mentioned S220 may specifically include the following steps:
cutting an image sample to be processed based on the position information of the first reference boundary frame to obtain a crane support leg image sample;
and carrying out classification calculation on the crane support leg image samples through a second network of a preset image processing model to obtain a reference confidence coefficient.
In this embodiment, after acquiring the position information of the first reference bounding box, the image processing apparatus may crop the image sample to be processed based on that position information. Because the image region framed by the first reference bounding box contains the crane leg, this yields a crane leg image sample, on which the second network of the preset image processing model then performs classification calculation to obtain the reference confidence. In this way, the classification accuracy of the preset image processing model can be improved, which helps to obtain a more accurate image processing model.
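The crop-then-classify step can be sketched as follows. This is a hedged minimal sketch: it assumes the bounding box is given as top-left/bottom-right pixel coordinates (the format described for the first bounding box elsewhere in this document), and the classifier is a deliberately trivial stub standing in for the second network (MobileNetV2 in the source), not the real model.

```python
def crop_by_bbox(image, bbox):
    """Crop an image (list of pixel rows) to the region framed by the bounding box.

    bbox = (x1, y1, x2, y2): top-left and bottom-right pixel coordinates.
    """
    x1, y1, x2, y2 = bbox
    return [row[x1:x2] for row in image[y1:y2]]

def classify_leg_crop(crop):
    """Stub classifier: returns a confidence in [0, 1].

    Here it is simply the mean normalized intensity -- a placeholder,
    NOT the actual second-network classification.
    """
    pixels = [p for row in crop for p in row]
    return sum(pixels) / (255.0 * len(pixels))

# Synthetic 8x8 grayscale image with a bright 3x3 "leg" region.
image = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(3, 6):
        image[y][x] = 255

crop = crop_by_bbox(image, (3, 2, 6, 5))   # crop exactly the leg region
confidence = classify_leg_crop(crop)
```

Cropping before classification is what gives the second network a high-resolution view of the leg, addressing the small-object problem described above.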
In addition, the image features of the acquired to-be-processed image samples vary greatly with the weather, and training on images with a single set of features is not beneficial to the model. To make the acquired to-be-processed images diverse, and thereby make the trained model more robust to conditions such as weather and illumination, in an embodiment, before acquiring the training sample set, the image processing method mentioned above may further include:
acquiring a plurality of original images;
and according to a preset data enhancement strategy, performing data enhancement processing on the plurality of original images to obtain a plurality of to-be-processed image samples corresponding to each original image.
Specifically, the image processing apparatus may obtain a plurality of original images before obtaining the training sample set, and perform data enhancement processing on each original image according to a preset data enhancement policy to obtain a plurality of to-be-processed image samples corresponding to each original image.
Wherein, the original image comprises a crane image. The preset data enhancement strategy can be preset based on actual needs or experience and is used to enhance the image. Each preset data enhancement strategy can comprise two data enhancement operations, and each data enhancement operation can comprise the probability of using that operation and a magnitude related to the operation, so that the optimal combination of data enhancement strategies can be found by reinforcement learning over the search space formed by these operations. Note that a probability of 0 or a magnitude of 0 indicates that the enhancement operation is not used.
In addition, it should be noted that the preset data enhancement policies referred to in the embodiment of the present application may be five data enhancement policies selected from enhancement operations such as ShearX/Y, TranslateX/Y, Rotate, AutoContrast, Invert, Equalize, Sharpness, etc., namely: TranslateX_BBox and Equalize; TranslateY_Only_BBoxes and Cutout; Sharpness and ShearX_BBox; ShearY_BBox and TranslateY_Only_BBoxes; and Rotate_BBox and Color.
Wherein, TranslateX_BBox: translate the ground-truth annotation box and the original image together. Equalize: perform histogram equalization on each channel. TranslateY_Only_BBoxes: randomly translate only the ground-truth annotation boxes. Cutout: delete a rectangular region of the image. Sharpness: sharpen the image. ShearX_BBox: shear the image and the ground-truth box. Rotate_BBox: rotate the image and the ground-truth box. Color: apply a color transformation to the image.
Specific examples can be shown in table 1:
TABLE 1 data enhancement strategy
In addition, the number of samples derived from each original image can also be increased by duplication.
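The probability/magnitude mechanics of these per-operation policies can be sketched in plain Python. This is a hedged illustration: the operation implementation, the mapping of magnitude to pixels, and the policy values below are illustrative stand-ins, not the patent's actual Table 1 entries.

```python
import random

def translate_x_bbox(image_w, bbox, magnitude):
    """TranslateX_BBox sketch: shift the ground-truth box horizontally.

    In the real operation the image content is shifted together with the box;
    only the box arithmetic is shown here. The magnitude is assumed to already
    be a pixel offset (an assumption made for this sketch).
    """
    x1, y1, x2, y2 = bbox
    x1 = max(0, min(image_w, x1 + magnitude))
    x2 = max(0, min(image_w, x2 + magnitude))
    return (x1, y1, x2, y2)

def apply_policy(bbox, image_w, policy, rng):
    """Apply each (op, probability, magnitude) entry with its own probability.

    A probability of 0 (or magnitude of 0) means the operation is never used,
    matching the note in the text above.
    """
    for op, prob, mag in policy:
        if rng.random() < prob:
            bbox = op(image_w, bbox, mag)
    return bbox

rng = random.Random(0)
policy = [(translate_x_bbox, 1.0, 4)]   # hypothetical entry: always apply, shift 4 px
new_box = apply_policy((10, 10, 20, 20), 100, policy, rng)
```

Each sub-policy in the search space is just such a list of (operation, probability, magnitude) triples; reinforcement learning then searches over these values.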
In this embodiment, the image processing apparatus may acquire a plurality of original images before acquiring the training sample set, and perform data enhancement processing on each of the plurality of original images according to a preset data enhancement policy to obtain a plurality of to-be-processed image samples corresponding to each of the plurality of original images. Therefore, the situation that the image features are single is avoided, and then a large number of to-be-processed image samples can be obtained, so that a more accurate image processing model can be trained conveniently.
Based on the image processing model obtained through training in the foregoing embodiment, the embodiment of the present application further provides a specific implementation of an image processing method, which is specifically described in detail with reference to fig. 5.
And S510, acquiring the image to be processed in real time.
The image processing device can acquire one frame of image to be processed every N frames from a real-time monitoring video captured by monitoring equipment installed at the operation site. The image to be processed comprises a crane image. The monitoring equipment may be mounted on a telegraph pole or a street lamp at the construction site and used to capture crane images during site operations. In addition, the horizontal distance between the monitoring device and the crane can be kept within a preset distance, for example, within 100 meters, which is not limited herein.
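The every-N-frames sampling can be sketched as follows. A hedged pure-Python sketch: the source does not fix N or the video interface, so frame indices stand in for decoded video frames, and N = 25 is an assumed example value (roughly one frame per second at 25 fps).

```python
def sample_every_n(frame_indices, n):
    """Yield every n-th frame index from a monitoring-video stream."""
    for i in frame_indices:
        if i % n == 0:
            yield i

# Hypothetical 100-frame clip sampled at N = 25.
kept = list(sample_every_n(range(100), 25))
```

With a real video source the same predicate would gate which decoded frames are passed on to the image processing model.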
S520, extracting image features in the image to be processed based on the first network of the image processing model, and determining the position information of the first boundary frame based on the image features.
The first network may be a YOLOv5 network, which is not specifically limited herein. The image feature may be the image feature at the crane leg included in the image to be processed. The image framed by the first bounding box comprises a crane leg image. The position information of the first bounding box may be the position coordinates, in the image to be processed, of the bounding box's top-left and bottom-right pixel vertices, which is not limited herein.
Specifically, the image processing apparatus may input the image to be processed acquired in real time into the first network of the image processing model, so as to extract image features in the image to be processed based on the first network of the image processing model, and may further determine the position information of the first bounding box based on the image features.
In one example, when inputting an image to be processed acquired in real time into the first network of the image processing model, the image processing apparatus may extract the image features of the image through the first network and process them to output the position information of each first bounding box whose confidence is greater than a first preset threshold. The first preset threshold may be set based on practical experience or actual needs, and is not specifically limited herein.
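The thresholding step described above can be sketched as a simple filter. A hedged sketch: the detection tuples and the 0.5 threshold below are illustrative, and the real first network (YOLOv5 per the text) emits such detections internally rather than through this hypothetical helper.

```python
def filter_detections(detections, threshold):
    """Keep only boxes whose confidence exceeds the first preset threshold.

    detections: list of (confidence, (x1, y1, x2, y2)) pairs, where the
    coordinates follow the top-left / bottom-right convention above.
    """
    return [(conf, box) for conf, box in detections if conf > threshold]

dets = [(0.92, (10, 20, 40, 60)),   # likely crane leg
        (0.30, (5, 5, 15, 15))]     # low-confidence clutter
kept = filter_detections(dets, 0.5)
```

Only the surviving boxes are passed on to the second network for classification.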
S530, carrying out classification calculation on the position information of the first boundary frame through a second network of the image processing model to obtain a target confidence coefficient.
The second network may be MobileNetV2, or other networks that can be used for classification calculation may be used, and is not limited in particular. The target confidence is used to characterize the probability of shimming at the crane legs.
Specifically, since the image framed by the first bounding box includes the crane leg image, the image processing apparatus may, after obtaining the position information of the first bounding box, perform a classification calculation on it through the second network of the image processing model to obtain the target confidence.
In the embodiment of the application, after an image to be processed including a crane image is acquired in real time, image features are extracted from it by the first network of the image processing model, and the position information of a first bounding box is determined from those features; the image framed by the first bounding box includes the crane leg image. The position information of the first bounding box is then classified by the second network of the image processing model to obtain a target confidence, which represents the probability that a base plate has been added at the crane leg. By acquiring images in real time and processing them to obtain this probability, it can be judged in time whether a base plate has been added at the crane legs, ensuring the safety of crane operation.
In order to obtain the target confidence more accurately, in an embodiment, the above-mentioned step S530 may specifically include the following steps:
cutting an image to be processed based on the position information of the first boundary frame to obtain an image of a crane support leg;
and carrying out classification calculation on the crane support leg images through a second network of the image processing model to obtain a target confidence coefficient.
In this embodiment, the image processing device may crop the image to be processed based on the acquired position information of the first bounding box to obtain a crane leg image, and may then perform classification calculation on the crane leg image through the second network of the image processing model to obtain the probability that a base plate has been added at the crane leg. In this way, the cropped image is processed by the image processing model, the target confidence can be obtained accurately, a base plate can be added at the crane leg in time, and the safety of crane operation can be guaranteed.
In order to more accurately and fully describe the image processing method provided by the embodiment of the present application, in an embodiment, the image processing method mentioned above may further include the following steps:
and sending the image to be processed to the alarm platform for generating alarm information by the alarm platform under the condition that the target confidence is greater than a preset threshold.
The preset threshold may be a confidence threshold set based on actual experience or actual needs.
Specifically, when the target confidence is greater than the preset threshold, the image processing device can send the image to be processed to the alarm platform so that the alarm platform generates alarm information. Personnel can then be reminded in time and the crane legs can be adjusted subsequently, further ensuring the safety of crane operation.
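The alarm trigger can be sketched as a one-condition check. A hedged sketch: the alarm-platform interface is not specified in the source, so a hypothetical `send` callback and an assumed 0.8 threshold stand in for it.

```python
def maybe_alert(image_id, target_confidence, threshold, send):
    """Forward the image to the alarm platform when confidence exceeds the threshold.

    `send` is a stand-in for the (unspecified) alarm-platform interface.
    Returns True when an alarm was raised.
    """
    if target_confidence > threshold:
        send(image_id)
        return True
    return False

sent = []
fired = maybe_alert("frame_0042", 0.95, 0.8, sent.append)
```

In a deployment, `send` would post the offending frame to the alarm platform, which then generates the alarm information described above.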
TABLE 2 Statistical table of test results

Algorithm type | Test scenarios | Video frames | Correctly recognized | Incorrectly recognized | Accuracy
Crane leg without added base plate | 2 | 6320 | 5963 | 357 | 94.3%
Crane leg without added base plate | 2 | 6542 | 6171 | 371 | 94.3%
Crane leg without added base plate | 1 | 5324 | 5003 | 321 | 93.9%
Crane leg without added base plate | 1 | 4890 | 4643 | 247 | 94.9%
The image processing method provided by the embodiment of the application can detect in real time whether a base plate has been added to a crane in operation; it not only meets management requirements but also allows potential safety hazards to be reported in time, avoiding accidents. The method was trained and tested on a collected construction-site dataset photographed by users, on a machine with an Intel Core i7 CPU, 4 GB of memory, and an NVIDIA GeForce 2080Ti discrete graphics card. The model was initialized from an ImageNet pre-trained weight file and trained for 70 iterations; the dataset comprises 1-3 test scenes each in the morning, at noon, in the afternoon, and in the evening, providing enough test images. The specific results are shown in table 2.
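The accuracy column in Table 2 is consistent with correct recognitions divided by total video frames, which can be checked directly. A hedged sketch (the patent does not state the exact rounding rule used for the table, so only the ratio itself is computed here):

```python
def accuracy(correct, total):
    """Recognition accuracy as used in Table 2: correct recognitions / total frames."""
    return correct / total

# First row of Table 2: 5963 correct out of 6320 frames.
acc = accuracy(5963, 6320)   # approximately 0.9435, i.e. the ~94.3% reported
```

The same ratio reproduces the other rows of the table to within rounding.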
Based on the same inventive concept, the embodiment of the application also provides an image processing device. The image processing apparatus provided in the embodiment of the present application is specifically described with reference to fig. 6.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 6, the image processing apparatus 600 may include: an acquisition module 610, a determination module 620, and a classification calculation module 630.
The acquiring module 610 is configured to acquire an image to be processed in real time, where the image to be processed includes a crane image.
The determining module 620 is configured to extract image features in the image to be processed based on the first network of the image processing model, and determine position information of a first bounding box based on the image features, where the image framed by the first bounding box includes a crane leg image.
And the classification calculation module 630 is configured to perform classification calculation on the position information of the first boundary frame through a second network of the image processing model to obtain a target confidence, where the target confidence is used to represent the probability of adding the base plate at the crane support leg.
In an embodiment, the above mentioned acquiring module is further configured to crop the image to be processed based on the position information of the first bounding box, and acquire the crane leg image.
The related classification calculation module is also used for performing classification calculation on the crane support leg images through a second network of the image processing model to obtain a target confidence coefficient.
In one embodiment, the image processing apparatus referred to above may include a transmission module.
And the sending module is used for sending the image to be processed to the alarm platform under the condition that the target confidence coefficient is greater than a preset threshold value so as to generate alarm information by the alarm platform.
In an embodiment, the obtaining module is further configured to, before the first network based on the image processing model extracts image features in the image to be processed and determines a first processing result based on the image features, obtain a training sample set, where the training sample set includes a plurality of image samples to be processed and a tag confidence corresponding to each of the image samples to be processed.
The image processing apparatus referred to above may further include a training module.
And the training module is used for training a preset image processing model by using the to-be-processed image samples in the training sample set and the label confidence corresponding to each to-be-processed image sample to obtain the trained image processing model.
In one embodiment, the training module may be specifically configured to:
extracting reference image features in an image sample to be processed based on a first network of a preset image processing model, and determining position information of a first reference boundary frame based on the reference image features, wherein the image sample framed by the first reference boundary frame comprises a crane landing leg image sample;
classifying and calculating the position information of the first reference bounding box through a second network of a preset image processing model to obtain a reference confidence coefficient; the reference confidence coefficient is used for representing the probability of adding the base plate at the crane support leg;
determining a loss function value of a preset image processing model according to the reference confidence coefficient of the target image sample to be processed and the label confidence coefficient of the target image sample to be processed, wherein the target image sample to be processed is any one of the image samples to be processed;
and training the preset image processing model by using the image sample to be processed based on the loss function value of the preset image processing model to obtain the trained image processing model.
In one embodiment, the acquiring module is further configured to acquire a plurality of raw images before acquiring the training sample set, wherein the raw images include crane images.
The image processing apparatus referred to above may comprise a data enhancement module.
And the data enhancement module is used for performing data enhancement processing on the plurality of original images according to a preset data enhancement strategy so as to obtain a plurality of to-be-processed image samples corresponding to each original image.
In the embodiment of the application, after an image to be processed including a crane image is acquired in real time, image features are extracted from it by the first network of the image processing model, and the position information of a first bounding box is determined from those features; the image framed by the first bounding box includes the crane leg image. The position information of the first bounding box is then classified by the second network of the image processing model to obtain a target confidence, which represents the probability that a base plate has been added at the crane leg. By acquiring images in real time and processing them to obtain this probability, it can be judged in time whether a base plate has been added at the crane legs, ensuring the safety of crane operation.
Each module in the image processing apparatus provided in the embodiment of the present application may implement the method steps in the embodiments shown in fig. 1, fig. 2, or fig. 5, and achieve the corresponding technical effects, and for brevity, no further description is given here.
Fig. 5 shows a hardware structure diagram of an electronic device according to an embodiment of the present application.
The electronic device may comprise a processor 501 and a memory 502 in which computer program instructions are stored.
Specifically, the processor 501 may include a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured to implement one or more Integrated circuits of the embodiments of the present Application.
Memory 502 may include mass storage for data or instructions. By way of example, and not limitation, memory 502 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 502 may include removable or non-removable (or fixed) media, where appropriate. The memory 502 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 502 is non-volatile solid-state memory.
The memory may include Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors), it is operable to perform operations described with reference to the methods according to an aspect of the present disclosure.
The processor 501 reads and executes the computer program instructions stored in the memory 502 to implement any one of the image processing methods in the above-described embodiments.
In one example, the electronic device can also include a communication interface 503 and a bus 510. As shown in fig. 5, the processor 501, the memory 502, and the communication interface 503 are connected via a bus 510 to complete communication therebetween.
The communication interface 503 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present application.
Bus 510 comprises hardware, software, or both that couple the components of the online data traffic billing device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 510 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
In addition, in combination with the image processing method in the foregoing embodiments, the embodiments of the present application may be implemented by providing a computer storage medium. The computer storage medium having computer program instructions stored thereon; the computer program instructions realize the image processing method provided by the embodiment of the application when being executed by a processor.
The embodiment of the present application further provides a computer program product; when the instructions in the computer program product are executed by a processor of an electronic device, the electronic device executes the image processing method provided in the embodiments of the present application.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the above structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable image processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As will be apparent to those skilled in the art, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring an image to be processed in real time, wherein the image to be processed comprises a crane image;
extracting image features in the image to be processed based on a first network of an image processing model, and determining position information of a first boundary frame based on the image features, wherein the image framed by the first boundary frame comprises a crane support leg image;
and carrying out classification calculation on the position information of the first boundary frame through a second network of the image processing model to obtain a target confidence coefficient, wherein the target confidence coefficient is used for representing the probability of adding a base plate at the crane support leg.
2. The method of claim 1, wherein the classifying the position information of the first bounding box through the second network of image processing models to obtain a target confidence comprises:
cutting the image to be processed based on the position information of the first boundary frame to obtain an image of a crane support leg;
and carrying out classification calculation on the crane support leg images through a second network of the image processing model to obtain a target confidence coefficient.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
and sending the image to be processed to an alarm platform for generating alarm information by the alarm platform under the condition that the target confidence is greater than a preset threshold.
4. The method of claim 1, wherein before the first network based on the image processing model extracts image features in the image to be processed and determines the first processing result based on the image features, the method further comprises:
acquiring a training sample set, wherein the training sample set comprises a plurality of image samples to be processed and a label confidence corresponding to each image sample to be processed;
and training a preset image processing model by using the image samples to be processed in the training sample set and the label confidence corresponding to each image sample to be processed to obtain the trained image processing model.
5. The method according to claim 4, wherein the training a preset image processing model by using the to-be-processed image samples in the training sample set to obtain a trained image processing model comprises:
extracting reference image features in the image sample to be processed based on a first network of a preset image processing model, and determining position information of a first reference boundary frame based on the reference image features, wherein the image sample framed by the first reference boundary frame comprises a crane landing leg image sample;
classifying and calculating the position information of the first reference bounding box through a second network of a preset image processing model to obtain a reference confidence coefficient; the reference confidence coefficient is used for representing the probability of adding the base plate at the crane leg;
determining a loss function value of a preset image processing model according to a reference confidence coefficient of a target image sample to be processed and a label confidence coefficient of the target image sample to be processed, wherein the target image sample to be processed is any one of the image samples to be processed;
and training the preset image processing model by using the image sample to be processed based on the loss function value of the preset image processing model to obtain the trained image processing model.
6. The method of claim 4 or 5, wherein prior to obtaining the training sample set, the method further comprises:
acquiring a plurality of original images, wherein the original images comprise crane images;
and according to a preset data enhancement strategy, performing data enhancement processing on the plurality of original images to obtain a plurality of to-be-processed image samples corresponding to each original image.
7. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a to-be-processed image in real time, wherein the to-be-processed image comprises a crane image;
a determining module, configured to extract image features from the to-be-processed image based on a first network of an image processing model, and determine position information of a first bounding box based on the image features, wherein the image framed by the first bounding box comprises a crane support leg image;
and a classification calculation module, configured to perform classification calculation on the position information of the first bounding box through a second network of the image processing model to obtain a target confidence, wherein the target confidence represents the probability that a base plate is added at the crane support leg.
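The three modules of claim 7 form an acquire → detect → classify pipeline. The sketch below shows only the data flow; `first_network` and `second_network` are hypothetical stand-ins (not from the patent) for the detection backbone and the classifier head:

```python
import math

def first_network(image):
    """Hypothetical detection backbone: returns the position information
    of a bounding box framing the crane support leg as (x, y, w, h)."""
    h, w = len(image), len(image[0])
    return (w // 4, h // 4, w // 2, h // 2)

def second_network(bbox):
    """Hypothetical classifier head: maps box position information to a
    confidence, i.e. the probability that a base plate is present."""
    x, y, w, h = bbox
    return 1.0 / (1.0 + math.exp(-((w * h) / 100.0 - 1.0)))

def process(image):
    # determining module: first network -> bounding box position
    bbox = first_network(image)
    # classification calculation module: second network -> target confidence
    confidence = second_network(bbox)
    return bbox, confidence
```

In the apparatus of claim 7, the acquisition module would feed frames captured in real time into `process`, and downstream logic could alarm when the confidence falls below a threshold.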
8. An electronic device, characterized in that the device comprises: a processor and a memory storing computer program instructions;
the processor reads and executes the computer program instructions to implement the image processing method of any one of claims 1 to 6.
9. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the image processing method of any one of claims 1 to 6.
10. A computer program product, wherein instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform the image processing method according to any one of claims 1 to 6.
CN202210654662.5A 2022-06-10 2022-06-10 Image processing method, device, equipment, medium and product Active CN114913232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210654662.5A CN114913232B (en) 2022-06-10 2022-06-10 Image processing method, device, equipment, medium and product

Publications (2)

Publication Number Publication Date
CN114913232A true CN114913232A (en) 2022-08-16
CN114913232B CN114913232B (en) 2023-08-08

Family

ID=82769921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210654662.5A Active CN114913232B (en) 2022-06-10 2022-06-10 Image processing method, device, equipment, medium and product

Country Status (1)

Country Link
CN (1) CN114913232B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020118893A1 (en) * 1997-07-07 2002-08-29 Van-Duc Nguyen Method and apparatus for image registration
CN113591967A (en) * 2021-07-27 2021-11-02 南京旭锐软件科技有限公司 Image processing method, device and equipment and computer storage medium
CN114170136A (en) * 2021-11-04 2022-03-11 广州大学 Method, system, device and medium for detecting defects of fasteners of contact net bracket device
CN114299304A (en) * 2021-12-15 2022-04-08 腾讯科技(深圳)有限公司 Image processing method and related equipment
CN114445610A (en) * 2021-12-27 2022-05-06 湖南中联重科应急装备有限公司 Landing leg safety detection method, processor and device for fire fighting truck and fire fighting truck
CN114529543A (en) * 2022-04-20 2022-05-24 清华大学 Installation detection method and device for peripheral screw gasket of aero-engine


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1707, Building 2, East Ring Road, Yanqingyuan, Zhongguancun, Yanqing District, Beijing, 102199

Applicant after: Jiayang Smart Security Technology (Beijing) Co.,Ltd.

Address before: Room 1707, Building 2, East Ring Road, Yanqingyuan, Zhongguancun, Yanqing District, Beijing, 102199

Applicant before: PETROMENTOR INTERNATIONAL EDUCATION (BEIJING) CO.,LTD.

GR01 Patent grant