CN111967450B - Sample acquisition method, training method, device and system for automatic driving model - Google Patents

Sample acquisition method, training method, device and system for automatic driving model

Info

Publication number
CN111967450B
CN111967450B (application CN202011129255.XA)
Authority
CN
China
Prior art keywords
image
combination
road condition
classified
verified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011129255.XA
Other languages
Chinese (zh)
Other versions
CN111967450A (en)
Inventor
陈翔
樊潇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Joynext Technology Corp
Original Assignee
Ningbo Joynext Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Joynext Technology Corp filed Critical Ningbo Joynext Technology Corp
Priority to CN202011129255.XA priority Critical patent/CN111967450B/en
Publication of CN111967450A publication Critical patent/CN111967450A/en
Application granted granted Critical
Publication of CN111967450B publication Critical patent/CN111967450B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 — Classification techniques
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06V 2201/07 — Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a sample acquisition method, a training method, a device and a system for an automatic driving model, belonging to the technical field of automatic driving. The sample acquisition method comprises the following steps: acquiring a road condition image shot by vehicle-mounted shooting equipment or roadside shooting equipment; classifying the road condition image and determining an image to be verified; generating an image combination containing the image to be verified, and generating a picture verification code according to the image combination for user verification; and receiving a marked image combination returned after the user completes verification, determining the label of the image to be verified according to the marked image combination, and taking the road condition image with the determined label as a sample. The method supplements the determination of sample labels by a classification model and solves the problem that the labels of some pictures cannot be determined by the classification model alone. By acquiring road condition images in real time through interaction among the shooting equipment, the on-board unit and the roadside unit, the model can be trained promptly, so that it meets driving requirements on complex road conditions.

Description

Sample acquisition method, training method, device and system for automatic driving model
Technical Field
The invention relates to the technical field of automatic driving, in particular to a sample acquisition method, a training method, a device and a system for an automatic driving model.
Background
In automatic driving, a neural network model is mainly applied to output vehicle behavior decisions from input vehicle running environment data and vehicle condition data. The decision derived from the running environment data is determined chiefly from road condition images. Training an automatic driving model requires a large number of labeled road condition images as samples, so that the model can accurately determine vehicle behavior decisions from road condition images. Because the number of images needed is huge and the quality of the collected raw road condition images varies, labeling the raw images has become a difficult problem in automatic driving model training.
In the prior art, road condition images are labeled mainly by machine or by manual crowdsourcing. Machine labeling can be inaccurate or impossible for some images, while manual labeling consumes a large amount of manpower and is inefficient. The labeling methods of the prior art therefore limit both the accuracy and the efficiency of automatic driving models.
Disclosure of Invention
In order to solve the problems in the prior art, the embodiment of the invention provides a sample acquisition method, a training method, a device and a system for an automatic driving model. The technical scheme is as follows:
in a first aspect, a method for obtaining a sample for an automatic driving model is provided, the method including:
acquiring a road condition image, wherein the road condition image is shot by vehicle-mounted shooting equipment or road side shooting equipment;
classifying the road condition images and determining images to be verified;
generating an image combination containing the image to be verified, and generating a picture verification code according to the image combination for user verification;
and receiving a marked image combination returned after the user completes verification, determining the label of the image to be verified according to the marked image combination, and taking the road condition image with the determined label as a sample.
Further, the classifying the road condition image and determining the image to be verified includes:
detecting a target in the road condition image;
classifying the road condition image according to the target by using a classification model to obtain the classification confidence of the road condition image for each category;
and comparing the classification confidence with a classification confidence condition; if a category meeting the classification confidence condition exists, determining the road condition image as a classified image and determining the label of the classified image according to that category; if no category meeting the classification confidence condition exists, determining the road condition image as an unclassified image, the unclassified image being the image to be verified.
Further, the generating an image combination including the image to be verified, and generating a picture verification code according to the image combination includes:
ranking the categories corresponding to the unclassified image by classification confidence, taking the highest-ranked category as a first verification category of the unclassified image, and taking a category meeting a ranking condition as a second verification category of the unclassified image;
combining the unclassified image with the classified images belonging to the first verification category and the classified images belonging to the second verification category according to a first combination rule to generate the image combination;
and generating a picture verification code according to the image combination.
Further, the generating an image combination including the image to be verified, and generating a picture verification code according to the image combination, further includes:
and judging whether the category corresponding to the classified image is a specific category, if so, generating the image combination according to the classified image belonging to the specific category, and generating the picture verification code according to the image combination.
Further, the generating the image combination according to the classified images belonging to the specific category and the generating the picture verification code according to the image combination includes:
determining whether a classified image belonging to the particular category is an unknown precise range image;
if so, determining the unknown precise range image as the image to be verified and generating a first sub-image combination from it according to a second combination rule; if not, generating a second sub-image combination from the known precise range image according to the second combination rule;
combining the first sub-image combination and the second sub-image combination to generate the image combination;
and generating the picture verification code according to the image combination.
Further, the determining the label of the image to be verified according to the marked image combination includes:
and calculating the marking confidence of the image to be verified in the marked image combination, and determining a label for each image to be verified that meets a marking confidence condition.
In a second aspect, there is provided an automated driving model training method, the method comprising:
training an automatic driving model by using the sample obtained by the sample obtaining method in any one of the first aspect.
In a third aspect, there is provided a sample acquisition apparatus for an automatic driving model, the apparatus including:
the road condition image acquisition module is used for acquiring a road condition image, and the road condition image is shot by vehicle-mounted shooting equipment or road side shooting equipment;
the classification module is used for classifying the road condition images and determining images to be verified;
the verification code generation module is used for generating an image combination containing the image to be verified and generating a picture verification code according to the image combination for user verification;
and the label determining module is used for receiving a marked image combination returned after the user completes verification, determining the label of the image to be verified according to the marked image combination, and taking the road condition image with the determined label as a sample.
Further, the classification module includes:
the target detection module is used for detecting a target in the road condition image;
the image classification module is used for classifying the road condition images according to the targets by utilizing the classification model to obtain the classification confidence of the road condition images in each class;
and the classification output module is used for comparing the classification confidence with the classification confidence condition; if a category meeting the classification confidence condition exists, determining the road condition image as a classified image and determining its label according to that category; if no category meeting the classification confidence condition exists, determining the road condition image as an unclassified image, the unclassified image being an image to be verified.
Further, the verification code generation module includes:
the classification confidence analysis module is used for ranking the categories corresponding to the unclassified image by classification confidence, taking the highest-ranked category as a first verification category of the unclassified image, and taking a category meeting a ranking condition as a second verification category of the unclassified image;
the image combination generation module is used for combining the unclassified image, the classified image belonging to the first verification category and the classified image belonging to the second verification category according to a first combination rule to generate an image combination;
and the picture verification code generation module is used for generating a picture verification code according to the image combination.
Further, the verification code generation module is further configured to determine whether a category corresponding to the classified image is a specific category, if so, generate an image combination according to the classified image belonging to the specific category, and generate a picture verification code according to the image combination.
Further, the verification code generation module further includes:
the judging module is used for judging whether a classified image belonging to the specific category is an unknown accurate range image, the unknown accurate range image being an image to be verified;
the image combination generation module is further used for generating a first sub-image combination from the unknown accurate range image according to a second combination rule when the classified image is an unknown accurate range image, and for generating a second sub-image combination from the known accurate range image according to the second combination rule when the classified image is a known accurate range image;
combining the first sub-image combination and the second sub-image combination to generate the image combination.
Further, a tag determination module, comprising:
the marking confidence coefficient calculation module is used for calculating the marking confidence coefficient of the image to be verified in the marking image combination;
the label determining module is further used for determining labels for the images to be verified meeting the marking confidence degree conditions.
In a fourth aspect, there is provided an automatic driving model training apparatus, comprising:
a model training module for training an automated driving model using samples obtained by a method according to any one of the first aspect.
In a fifth aspect, there is provided a computer system comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform the method of any of the first aspects above.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
1. The sample acquisition method, device and system provided by the embodiments of the invention obtain road condition images that are collected in real time by the on-board unit and shot by roadside or vehicle-mounted shooting equipment, and generate samples from them, which simplifies image collection for automatic driving model training. By acquiring road condition images in real time through interaction among the shooting equipment, the on-board unit and the roadside unit, the model can be trained promptly, so that it meets driving requirements on complex road conditions.
2. The sample acquisition method, device and system provided by the embodiments of the invention supplement the determination of sample labels by a classification model, solving the problem that the labels of some pictures cannot be determined by the classification model alone, and improve efficiency and accuracy compared with manual labeling. Because users complete identity verification and labeling through the generated picture verification codes, no manual labeling service needs to be purchased, saving sample acquisition cost.
3. The sample acquisition method, device and system disclosed by the embodiments of the invention are also suitable for sample acquisition in specific scenes, and are better suited to training an automatic driving model than traditional sample acquisition methods.
4. The model training method and device disclosed by the embodiments of the invention are better suited to automatic driving model training and are applicable to various complex traffic scenes.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a sample acquisition method provided by an embodiment of the present invention;
fig. 2 is a flowchart of a method for generating a picture verification code according to an embodiment of the present invention;
FIG. 3 is a flow chart of a model training method provided by an embodiment of the invention;
FIG. 4 is a schematic structural diagram of a sample acquiring device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an apparatus interaction provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a model training apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
An automatic driving neural network model is mainly used to determine a vehicle driving decision from road condition images, so as to control the vehicle to drive automatically according to road conditions. To make such a model decide accurately, massive quantities of labeled road condition pictures are required as training samples; in the prior art, however, the labeling of road condition pictures is often inaccurate and inefficient, which impairs the training effect of the model.
Based on the technical problems, the embodiment of the invention discloses a sample acquisition method, a training method, a device and a system for an automatic driving model, and the specific technical scheme is as follows:
as shown in fig. 1, a sample acquisition method for an automatic driving model includes:
and S11, acquiring road condition images, wherein the road condition images are shot by vehicle-mounted shooting equipment or road side shooting equipment.
The road condition image is an image representing the road environment and traffic environment. It may be a frame, a photograph or a video captured by vehicle-mounted shooting equipment such as a driving recorder, or by roadside shooting equipment such as a traffic monitoring camera.
In one embodiment, step S11 specifically comprises: acquiring the road condition image from the on-board unit through the roadside unit or a communication device, the on-board unit being in communication connection with the vehicle-mounted shooting equipment or the roadside shooting equipment.
The roadside unit (RSU) is a device installed at the roadside in an ETC system; it communicates with the on-board unit (OBU) using DSRC (Dedicated Short Range Communication) technology to realize vehicle identification and electronic toll deduction. The communication device may be a 5G communication device or a WiFi communication device.
In one embodiment, the road condition image is an image desensitized by the on-board unit.
And S12, classifying the road condition images and determining the images to be verified.
Classifying the road condition images mainly means classifying them in advance with a classification model; classification yields the categories to which each road condition image may belong, together with the classification confidence for each of those categories. Road condition images may be classified according to traffic indications, for example traffic lights, traffic signs and roadblocks, or according to driving relationships between vehicles, for example overtaking by the rear vehicle and lane changes by adjacent vehicles. The image combination may be a group of pictures composed of several pictures, for example a nine-grid group composed of nine pictures. The picture verification code comprises an image combination and prompt information, the prompt information being related to the category of the road condition image.
As shown in fig. 2, in one embodiment, the step S12 of classifying the road condition images and determining the image to be verified includes:
and S121, detecting the target in the road condition image.
And S122, classifying the road condition image according to the target by using the classification model to obtain the classification confidence of the road condition image in each class.
And S123, comparing the classification confidence with the classification confidence condition; if a category meeting the classification confidence condition exists, determining the road condition image as a classified image and determining its label according to that category; if no category meeting the classification confidence condition exists, determining the road condition image as an unclassified image, the unclassified image being an image to be verified.
The target detection in step S121 may specifically include: separating the foreground and background of the road condition image, and extracting the target with a target detection algorithm such as R-CNN, Faster R-CNN or YOLO. Taking Faster R-CNN as an example: convolutional layers extract feature maps of the image; a Region Proposal Network determines candidate rectangular regions in the road condition image; the ROI Pooling layer extracts a feature vector for each region; finally, bounding-box regression yields an accurate rectangular region, and the image content within that region is determined as the target.
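The final step of the detection pipeline above amounts to keeping only the regions the detector is confident about. The following is an illustrative sketch only: the `extract_targets` helper, the detection-dict layout and the 0.5 threshold are assumptions, not part of the disclosure; a real implementation would obtain the detections from a Faster R-CNN or YOLO model.

```python
# Hypothetical post-processing of a detector's raw output: keep only the
# detections whose score clears a confidence threshold; each surviving
# region is treated as a target for the subsequent classification step.
def extract_targets(detections, score_threshold=0.5):
    """detections: list of dicts with 'box' (x1, y1, x2, y2), 'label', 'score'."""
    return [d for d in detections if d["score"] >= score_threshold]
```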
After classification in step S122, the classification model outputs the classification confidence of the road condition image for each category. In the prior art, road condition images are generally classified with a machine learning model, but such a model cannot classify every road condition image: for some images, the classification confidence in every category fails to meet the confidence condition, so the category cannot be determined.
In step S123, the classification confidence condition may be set as, for example, the classification confidence of the road condition image in a certain category being not less than 95%. As long as the classification confidence of some category corresponding to the road condition image meets the condition, the image is determined to be a classified image; otherwise it is determined to be an unclassified image. The subsequent picture-verification-code steps mainly determine the category of the unclassified images.
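The decision rule of step S123 can be sketched in a few lines. The helper name `classify_image`, its dictionary interface and the 0.95 default (taken from the example threshold above) are illustrative assumptions:

```python
def classify_image(confidences, threshold=0.95):
    """confidences: {category: classification confidence}.
    Returns ('classified', label) when some category meets the confidence
    condition, otherwise ('to_verify', ranking) with the categories ranked
    by confidence for later use in building the picture verification code."""
    best = max(confidences, key=confidences.get)
    if confidences[best] >= threshold:
        return "classified", best
    ranking = sorted(confidences, key=confidences.get, reverse=True)
    return "to_verify", ranking
```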
And S13, generating an image combination containing the image to be verified, and generating a picture verification code according to the image combination for user verification.
As shown in fig. 2, in one embodiment, step S13 includes:
S131, ranking the categories corresponding to the unclassified image by classification confidence, taking the highest-ranked category as a first verification category of the unclassified image, and taking a category meeting a ranking condition as a second verification category of the unclassified image.
S132, according to a first combination rule, combining the unclassified image, the classified image belonging to the first verification category and the classified image belonging to the second verification category to generate an image combination.
And S133, generating a picture verification code according to the image combination.
Step S131 analyzes the categories corresponding to the unclassified image. The category with the highest classification confidence is the category to which the unclassified image most likely belongs, so the subsequently generated picture verification code is mainly used to determine whether the unclassified image belongs to that category. A category meeting the ranking condition serves as a verification category of the unclassified image and is used in the subsequent verification to judge whether the user's verification operation is accurate.
In step S132, the first combination rule includes: the total number of road condition images contained in one image combination, the specification of each road condition image, the number of classified images and the number of unclassified images which need to be set in the image combination.
Taking the generation of a nine-grid image combination as an example, suppose the ranking condition selects the categories ranked second and third in classification confidence for the unclassified image: the category with the highest classification confidence is the "traffic light" group, the second is the "traffic sign" group, and the third is the "roadblock" group. The road condition pictures finally forming the nine-grid image combination then comprise: 4 unclassified pictures whose most likely category is the traffic light group, 2 classified pictures of the traffic light group, and 3 classified pictures belonging to the traffic sign group or the roadblock group.
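The 4-2-3 composition of the nine-grid example can be sketched as follows, with image ids standing in for the pictures; the helper `build_grid` and its parameters are illustrative assumptions, not part of the disclosure:

```python
import random

def build_grid(unclassified, first_category, other_categories,
               n_unclassified=4, n_first=2, n_other=3, seed=None):
    """Assemble a nine-grid image combination per the first combination rule:
    4 unclassified images, 2 classified images of the first verification
    category, and 3 classified images of the other verification categories,
    shuffled so grid positions give nothing away."""
    rng = random.Random(seed)
    grid = (rng.sample(unclassified, n_unclassified)
            + rng.sample(first_category, n_first)
            + rng.sample(other_categories, n_other))
    rng.shuffle(grid)
    return grid
```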
It should be noted that the above embodiment is a preferred choice for the disclosed method; when generating an image combination, the invention may also combine only the unclassified image with classified images belonging to the first verification category.
In step S133, generating a picture verification code further includes:
generating prompt information according to the category of the image to be verified; and generating a picture verification code according to the prompt information and the image combination.
The prompt information is generated according to the first verification category. For example, if the first verification category of the unclassified image is the "traffic light" group, the generated prompt is: please select the pictures containing a "traffic light".
As shown in fig. 2, in one embodiment, some specific traffic scenes require more than a category: for example, for a speed limit sign, the time restriction printed on the sign needs to be recognized during image classification in addition to the sign itself. For such a specific scene, besides identifying the category of the road condition image, a more precise target range within the road condition image must also be identified. Based on this, in step S13, after the category of the road condition image is determined, generating an image combination containing an image to be verified and generating a picture verification code according to the image combination further includes:
and S134, judging whether the category corresponding to the classified image is a specific category, if so, generating an image combination according to the classified image, and generating a picture verification code according to the image combination.
In one embodiment, step S134 further comprises:
judging whether a classified image belonging to the specific category is an unknown accurate range image;
if so, determining the unknown accurate range image as an image to be verified and generating a first sub-image combination from the unknown accurate range images according to a second combination rule; otherwise, generating a second sub-image combination according to the second combination rule;
combining the first sub-image combination and the second sub-image combination to generate the image combination;
and generating a picture verification code according to the image combination.
In step S134, judging whether the category corresponding to the classified image is a specific category includes: matching the category corresponding to the classified image against a specific category table; if the matching succeeds, the category of the classified image is judged to be a specific category. Here, an unknown accurate range image is an image in which the precise target was not detected in the road condition image during the classification step; otherwise, it is a known accurate range image, in which the precise target was detected during the classification step.
Wherein the second combination rule comprises: the total number of road condition images contained in one image combination, the specification of each road condition image, and the numbers of unknown accurate range images and known accurate range images to be set in the image combination. Generating a picture combination from the first sub-image combination and the second sub-image combination includes combining them according to a third combination rule, which comprises: the number of first sub-image combinations and the number of second sub-image combinations. The second sub-image combination is used to verify how accurately the user completes the picture verification operation, from which it is judged whether the user's marks on the first sub-image combination are correct.
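The second and third combination rules above can be sketched as follows; the specific counts and the use of random sampling are assumptions for illustration only:

```python
import random

def build_image_combination(unknown_imgs, known_imgs,
                            n_unknown=3, n_known=6, seed=None):
    """Second combination rule: pick a fixed number of unknown accurate range
    images (the first sub-image combination, to be verified) and a fixed
    number of known accurate range images (the second sub-image combination,
    used to check the user's accuracy). Third combination rule: merge the
    two sub-combinations into one shuffled image combination."""
    rng = random.Random(seed)
    first_sub = rng.sample(unknown_imgs, n_unknown)   # first sub-image combination
    second_sub = rng.sample(known_imgs, n_known)      # second sub-image combination
    combination = first_sub + second_sub
    rng.shuffle(combination)                          # hide which images are which
    return combination
```

Shuffling is what prevents the user from telling the check images apart from the images being labeled.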
In step S134, generating a picture verification code includes:
generating prompt information according to the specific category to which the image with the unknown accurate range belongs; and generating a picture verification code according to the prompt information and the image combination.
S14: receiving the marked image combination returned by the user after completing verification, determining the label of the image to be verified according to the marked image combination, and taking the road condition image with the determined label as a sample.
In one embodiment, step S14 includes:
and calculating the marking confidence of the image to be verified in the marking image combination, and determining a label for the image to be verified meeting the marking confidence condition.
Specifically, the marking confidence in the above step is calculated as:

C(k,Q) = [ ( w·TNR_k^α·TPR_k^β + Σ_{i=1,i≠k}^{m} (x_{i,Q} ⊙ x_{k,Q})·TNR_i^α·TPR_i^β ) / ( w + Σ_{i=1,i≠k}^{m} TNR_i^α·TPR_i^β ) ]^(1/γ)

wherein:
C(k,Q) is the marking confidence of the image to be verified Q in the marked image combination returned by user k;
x_{i,Q} is the marking result for the image to be verified Q in the marked image combination returned by user i, where 1 denotes marked and 0 denotes unmarked, and x_{k,Q} is the corresponding marking result returned by user k;
TNR_i is the true negative rate of the marked image combination returned by user i, namely the proportion of unmarked images among the total number of road condition images contained in the marked image combination, and TNR_k is the true negative rate of the marked image combination returned by user k;
TPR_i is the true positive rate of the marked image combination returned by user i, namely the proportion of marked images among the total number of road condition images contained in the marked image combination, and TPR_k is the true positive rate of the marked image combination returned by user k;
w is a reliability weighting scale factor reflecting how strongly the verifying user's own marking reliability is weighted relative to that of the other users, and can be set as desired;
α is the true negative confidence weight and β is the true positive confidence weight;
⊙ denotes XNOR logic, equal to 1 when the two marking results agree and 0 otherwise;
i is a user index value, i ∈ [1, m] and i ≠ k; k identifies the current user, 1 ≤ k ≤ m, where m is a positive integer;
γ is a confidence coefficient that can be set as desired.
In the above formula, the marking confidence C(k,Q) of a given user k for the picture Q to be verified is influenced by two parts. The first part is user k's true negative rate TNR and true positive rate TPR on the known-category images within the same verification combination; these two indices reflect the reliability of user k's judgment. TNR and TPR carry different influence weights on the marking confidence depending on user k's marking result for image Q: when the marking result is positive (1), TNR weighs more heavily, and when it is negative (0), TPR weighs more heavily. The exponents α and β applied to TNR and TPR reflect this weighting effect. The second part is the influence of the marking results of the users other than k: if another user's marking result x_{i,Q} for picture Q agrees with user k's marking result x_{k,Q}, the agreement increases the marking confidence, while disagreement decreases it; this is reflected by the XNOR logic of x_{i,Q} and x_{k,Q}. The influence of each other user's marking result on the marking confidence is also related to the reliability of that user's own judgment: the less reliable a user's judgment, the smaller the effect of its marking result, which is again reflected through TNR and TPR. The difference in weight between the reliability of user k's own judgment and that of the other users is reflected by w. The marking confidence is therefore user k's own judgment-reliability term multiplied by the weight, plus the sum over all users other than k of each user's marking-result influence multiplied by that user's judgment reliability, divided by the weight plus the sum of the other users' judgment reliabilities for normalization, which keeps C(k,Q) in the range [0,1]; finally, the γ-th root is taken for parameter tuning.
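The computation described above can be sketched in code. This is a reconstruction from the surrounding description, with agreement modeled as XNOR, per-user reliability as TNR^α·TPR^β, normalization to [0,1], and a γ-th root; the parameter defaults are illustrative assumptions, not values from the patent:

```python
def labeling_confidence(k, marks, tnr, tpr, w=1.0, alpha=1.0, beta=1.0, gamma=1.0):
    """Marking confidence C(k,Q) for user k's mark on image Q.

    marks[i] is user i's marking result for Q (1 marked, 0 unmarked);
    tnr[i] / tpr[i] are user i's true negative / true positive rates on
    the known images of the same verification combination."""
    def reliability(i):
        # Judgment reliability of user i, weighted by the exponents alpha, beta.
        return (tnr[i] ** alpha) * (tpr[i] ** beta)

    def agrees(i):
        # XNOR of marking results: 1 when user i agrees with user k, else 0.
        return 1.0 if marks[i] == marks[k] else 0.0

    others = [i for i in range(len(marks)) if i != k]
    numerator = w * reliability(k) + sum(agrees(i) * reliability(i) for i in others)
    denominator = w + sum(reliability(i) for i in others)
    # Normalized to [0,1]; the gamma-th root is taken for parameter tuning.
    return (numerator / denominator) ** (1.0 / gamma)
```

With perfectly reliable users who all agree, the confidence is 1; each disagreeing user pulls the value down toward 0.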
As shown in fig. 3, based on the sample obtaining method, an embodiment of the present invention further provides an automatic driving model training method, including:
S31: training an automatic driving model using samples obtained by the above sample acquisition method, so as to obtain automatic driving decisions.
As shown in fig. 4, based on the sample obtaining method, an embodiment of the present invention further provides a sample obtaining apparatus for an automatic driving model, including:
the image acquisition module 401 is configured to acquire a road condition image, where the road condition image is captured by a vehicle-mounted capturing device or a road side capturing device.
In an embodiment, the image obtaining module 401 is specifically configured to obtain the road condition image from an on-board unit through a road side unit or a communication device, where the on-board unit is in communication connection with an on-board shooting device or a road side shooting device.
As shown in fig. 5, the image acquisition module is connected with the on-board unit through the road side unit or a communication device, and the on-board unit is connected with the on-board shooting device or the road side shooting device. Therefore, when the road side equipment captures an image, the image may be transmitted to the on-board unit using V2X technology, and the on-board unit, after desensitizing the image, uploads it to the image acquisition module 401 via the road side unit or a 5G communication device.
The classification module 402 is configured to classify the road condition image and determine an image to be verified.
In one embodiment, the classification module 402 includes:
and the target detection module is used for detecting the target in the road condition image.
And the image classification module is used for classifying the road condition images according to the targets by utilizing the classification model to obtain the classification confidence of the road condition images in each class.
And the classification output module is used for comparing the classification confidence with the confidence condition, determining the road condition image as a classified image if the classification meeting the confidence condition exists, determining the label of the classified image according to the classification, and determining the road condition image as an unclassified image if the classification meeting the confidence condition does not exist, wherein the unclassified image is an image to be verified.
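The classification output logic above (classification confidence compared against the confidence condition) can be sketched as follows; representing the confidence condition as a single numeric threshold is an illustrative assumption:

```python
def classification_output(confidences, threshold=0.9):
    """confidences maps category name -> classification confidence for one
    road condition image. If some category meets the confidence condition,
    the image is a classified image and its label is that category;
    otherwise it is an unclassified image, i.e. an image to be verified."""
    best = max(confidences, key=confidences.get)
    if confidences[best] >= threshold:
        return {"status": "classified", "label": best}
    return {"status": "unclassified", "label": None}
```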
The verification code generation module 403 is configured to generate an image combination containing the image to be verified and to generate a picture verification code according to the image combination for user verification.
In one embodiment, to further determine a category for the unclassified image, the verification code generation module 403 includes:
The classification confidence analysis module is used for sorting the categories corresponding to the unclassified image by classification confidence, taking the top-ranked category as the first verification category of the unclassified image, and taking the categories meeting the ranking condition as second verification categories of the unclassified image.
The image combination generation module is used for combining the unclassified image with the classified images belonging to the first verification category and the classified images belonging to the second verification category according to a first combination rule, so as to generate an image combination.
And the picture verification code generation module is used for generating a picture verification code according to the image combination.
Wherein the first combination rule comprises: the total number of road condition images contained in one image combination, the specification of each road condition image, the number of classified images and the number of unclassified images which need to be set in the image combination.
In an embodiment, the picture verification code generation module is specifically configured to generate prompt information according to the first verification category, and generate the picture verification code according to a combination of the prompt information and the image.
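The sorting of candidate categories into first and second verification categories described above can be sketched as follows; expressing the ranking condition as a minimum-confidence cutoff is an assumption for illustration:

```python
def verification_categories(confidences, ranking_min=0.2):
    """For an unclassified image, sort its candidate categories by
    classification confidence: the top-ranked category is the first
    verification category, and the remaining categories meeting the
    ranking condition are second verification categories."""
    ranked = sorted(confidences.items(), key=lambda kv: kv[1], reverse=True)
    first = ranked[0][0]
    second = [cat for cat, conf in ranked[1:] if conf >= ranking_min]
    return first, second
```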
In an embodiment, as another situation, to meet the driving requirements of specific traffic scenes, the verification code generation module 403 is further configured to judge whether the category corresponding to the classified image is a specific category; if so, it generates an image combination from the classified images belonging to the specific category and generates the picture verification code from the image combination.
Further, the verification code generation module 403 further includes:
The judging module is used for judging whether a classified image belonging to the specific category is an unknown accurate range image; an unknown accurate range image is an image to be verified.
The image combination generation module is further used for generating a first sub-image combination from the unknown accurate range images according to a second combination rule when the classified image is an unknown accurate range image, and generating a second sub-image combination from the known accurate range images according to the second combination rule when the classified image is a known accurate range image;
and combining the first sub-image combination and the second sub-image combination to generate the image combination.
Wherein the second combination rule comprises: the total number of road condition images contained in one image combination, the specification of each road condition image, the number of unknown accurate range images required to be set in the image combination and the number of known accurate range images.
In one embodiment, the picture verification code generation module is specifically configured to generate prompt information according to the specific category to which the unknown accurate range image belongs, and to generate the picture verification code according to the prompt information and the image combination.
The label determining module 404 is configured to receive the marked image combination returned after the user completes verification, determine the label of the image to be verified according to the marked image combination, and take the road condition image with the determined label as a sample.
In one embodiment, the tag determination module 404 includes:
and the marking confidence coefficient calculation module is used for calculating the marking confidence coefficient of the image to be verified in the marking image combination.
The label determination module 404 is further configured to determine a label for the image to be verified that satisfies the labeling confidence condition.
Specifically, the calculation formula of the marking confidence is:

C(k,Q) = [ ( w·TNR_k^α·TPR_k^β + Σ_{i=1,i≠k}^{m} (x_{i,Q} ⊙ x_{k,Q})·TNR_i^α·TPR_i^β ) / ( w + Σ_{i=1,i≠k}^{m} TNR_i^α·TPR_i^β ) ]^(1/γ)

wherein:
C(k,Q) is the marking confidence of the image to be verified Q in the marked image combination returned by user k;
x_{i,Q} is the marking result for the image to be verified Q in the marked image combination returned by user i, where 1 denotes marked and 0 denotes unmarked, and x_{k,Q} is the corresponding marking result returned by user k;
TNR_i is the true negative rate of the marked image combination returned by user i, namely the proportion of unmarked images among the total number of road condition images contained in the marked image combination, and TNR_k is the true negative rate of the marked image combination returned by user k;
TPR_i is the true positive rate of the marked image combination returned by user i, namely the proportion of marked images among the total number of road condition images contained in the marked image combination, and TPR_k is the true positive rate of the marked image combination returned by user k;
w is a reliability weighting scale factor reflecting how strongly the verifying user's own marking reliability is weighted relative to that of the other users, and can be set as desired;
α is the true negative confidence weight and β is the true positive confidence weight;
⊙ denotes XNOR logic, equal to 1 when the two marking results agree and 0 otherwise;
i is a user index value, i ∈ [1, m] and i ≠ k; k identifies the current user, 1 ≤ k ≤ m, where m is a positive integer;
γ is a confidence coefficient that can be set as desired.
As shown in fig. 6, based on the sample obtaining method, an embodiment of the present invention further provides an automatic driving model training apparatus, including:
a model training module 601, configured to train an automatic driving model using a sample obtained by the sample obtaining method disclosed in any of the embodiments above, so as to obtain an automatic driving decision.
As shown in fig. 7, based on the above embodiment of the method for obtaining a sample for an automatic driving model, an embodiment of the present invention further provides a computer system, including:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform the above-described automated driving model sample acquisition method.
Fig. 7 illustrates an architecture of a computer system, which may include, in particular, a processor 710, a video display adapter 711, a disk drive 712, an input/output interface 713, a network interface 714, and a memory 720. The processor 710, the video display adapter 711, the disk drive 712, the input/output interface 713, the network interface 714, and the memory 720 may be communicatively coupled via a communication bus 730.
The processor 710 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solution provided in the present Application.
The Memory 720 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 720 may store an operating system 721 for controlling the operation of the electronic device 700, a basic input output system 722(BIOS) for controlling low-level operations of the electronic device 700. In addition, a web browser 723, a data storage management system 724, a device identification information processing system 725, and the like may also be stored. The device identification information processing system 725 may be an application program that implements the operations of the foregoing steps in this embodiment of the present application. In summary, when the technical solution provided by the present application is implemented by software or firmware, the relevant program codes are stored in the memory 720 and called for execution by the processor 710.
The input/output interface 713 is used for connecting an input/output module to realize information input and output. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The network interface 714 is used for connecting a communication module (not shown in the figure) to realize communication interaction between the device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 730 includes a path that transfers information between the various components of the device, such as processor 710, video display adapter 711, disk drive 712, input/output interface 713, network interface 714, and memory 720.
In addition, the electronic device 700 may also obtain information of specific pickup conditions from a virtual resource object pickup condition information database for performing condition judgment, and the like.
It should be noted that although the above-mentioned devices only show the processor 710, the video display adapter 711, the disk drive 712, the input/output interface 713, the network interface 714, the memory 720, the bus 730, etc., in a specific implementation, the devices may also include other components necessary for normal operation. Furthermore, it will be understood by those skilled in the art that the apparatus described above may also include only the components necessary to implement the solution of the present application, and not necessarily all of the components shown in the figures.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially implemented or the portions contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the embodiments or some portions of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are merely illustrative, wherein units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
1. The sample acquisition method, device and system provided by the embodiments of the present invention generate samples from road condition images captured by road side or vehicle-mounted shooting equipment and collected in real time by the on-board unit, which simplifies image collection for automatic driving model training. Because the road condition images are obtained in real time through the interaction of the shooting equipment, the on-board unit and the road side unit, the model can be trained in time so that it meets driving requirements on complex road conditions.
2. The sample acquisition method, device and system provided by the embodiments of the present invention serve as a supplement to determining sample labels with a classification model, solving the problem that some pictures cannot be labeled by the classification model alone. Compared with manual labeling, efficiency and accuracy are improved; and since users complete identity verification and labeling through the generated picture verification codes, no manual labeling service needs to be purchased, saving sample acquisition cost.
3. The sample acquisition method, device and system disclosed by the embodiments of the present invention are also suitable for sample acquisition in specific scenes, and are better suited to automatic driving model training than traditional sample acquisition methods.
4. The model training method and device disclosed by the embodiments of the present invention are better suited to model training for automatic driving and are applicable to various complex traffic scenes.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A method for obtaining a sample for an automatic driving model, comprising:
acquiring a road condition image, wherein the road condition image is shot by vehicle-mounted shooting equipment or road side shooting equipment;
detecting a target in the road condition image, classifying the road condition image according to the target by using a classification model, obtaining classification confidences of the road condition image in each category, and comparing the classification confidences with a classification confidence condition; if a category meeting the classification confidence condition exists, determining the road condition image as a classified image and determining a label of the classified image according to the category; if no category meeting the classification confidence condition exists, determining the road condition image as an unclassified image;
judging whether the category corresponding to the classified image is a specific category, if so, judging whether the classified image belonging to the specific category is an unknown precise range image, if so, determining that the classified image is the unknown precise range image, and the specific category is the category of the road condition image with a target range;
determining the unclassified image or the unknown precise range image as an image to be verified;
generating an image combination containing the image to be verified, and generating a picture verification code according to the image combination for user verification, wherein generating the image combination and the picture verification code comprises:
when the image to be verified is the unknown precise range image, generating a first sub-image combination according to the unknown precise range image and a second combination rule for the unknown precise range image in the classified images belonging to the specific class,
for a non-unknown precise range image in the classified images belonging to the specific category, generating a second sub-image combination from it according to the second combination rule,
combining the first sub-image combination and the second sub-image combination to generate the image combination,
generating the picture verification code according to the image combination;
and receiving a mark image combination returned after the user completes verification, determining the label of the image to be verified according to the mark image combination, and taking the road condition image with the determined label as a sample.
2. The method of claim 1, wherein generating an image combination including the image to be verified, generating a picture verification code from the image combination, further comprises:
when the image to be verified is the unclassified image, sorting the classes corresponding to the unclassified image according to the classification confidence degrees, taking the class with the highest sorting as a first verification class of the unclassified image, and taking the class meeting the sorting condition as a second verification class of the unclassified image;
combining the unclassified image with the classified images belonging to the first verification category and the classified images belonging to the second verification category according to a first combination rule to generate the image combination;
and generating a picture verification code according to the image combination.
3. The method of claim 1 or 2, wherein determining the label of the image to be authenticated from the combination of marked images comprises:
and calculating the marking confidence of the image to be verified in the marking image combination, and determining a label for the image to be verified meeting the marking confidence condition.
4. An automated driving model training method, comprising:
training an autopilot model using samples obtained by a method according to any one of claims 1 to 3.
5. A sample acquisition apparatus for an automatic driving model, characterized by comprising:
the road condition image acquisition module is used for acquiring a road condition image, and the road condition image is shot by vehicle-mounted shooting equipment or road side shooting equipment;
a classification module comprising:
the target detection module is used for detecting targets in the road condition images, classifying the road condition images according to the targets by utilizing a classification model, and obtaining classification confidence coefficients of the road condition images in various categories;
the classification data module is used for comparing the classification confidence with a classification confidence condition, determining the road condition image as a classified image if a category meeting the classification confidence condition exists, determining a label of the classified image according to the category, and determining the road condition image as an unclassified image if no category meeting the classification confidence condition exists;
the verification code generation module is used for judging whether the category corresponding to the classified image is a specific category or not, generating an image combination containing an image to be verified, generating a picture verification code according to the image combination, and verifying a user, and comprises:
a judging module, configured to judge whether a classified image belonging to a specific category is an unknown precise range image when the classified image is the specific category, determine that the classified image is an unknown precise range image if the classified image is the unknown precise range image, determine that the unclassified image or the unknown precise range image is the image to be verified, and determine that the specific category is a category of the road condition image with a target range;
an image combination generating module, configured to generate a first sub-image combination according to the unknown precise range image according to a second combination rule for the unknown precise range image in the classified images belonging to the specific class when the image to be verified is the unknown precise range image,
for a non-unknown precise range image in the classified images belonging to the specific category, generating a second sub-image combination from it according to the second combination rule,
combining the first sub-image combination and the second sub-image combination to generate the image combination,
generating the picture verification code according to the image combination;
and the label determining module is used for receiving a mark image combination returned by the user after the verification is finished, determining the label of the image to be verified according to the mark image combination, and taking the road condition image with the determined label as a sample.
6. An automatic driving model training apparatus, comprising:
a model training module for training an automatic driving model using samples obtained by the method of any one of claims 1 to 3.
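Claim 6 only names a training module, so any concrete trainer is an assumption. The toy loop below fits a linear scorer to (feature vector, label) samples by stochastic gradient descent on a squared loss, standing in for a real automatic driving model:

```python
def train_model_stub(samples, epochs=50, lr=0.1):
    """Toy stand-in for the claimed model training module: fit a linear
    scorer to (feature_vector, label) samples via SGD on a squared loss."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y                       # squared-loss gradient factor
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w
```

In practice the module would wrap a deep perception or planning network; the point of the sketch is only that the samples produced by the verification-code labeling pipeline feed directly into an ordinary supervised loop.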
7. A computer system, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, cause the computer system to perform the method of any one of claims 1 to 3.
CN202011129255.XA 2020-10-21 2020-10-21 Sample acquisition method, training method, device and system for automatic driving model Active CN111967450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011129255.XA CN111967450B (en) 2020-10-21 2020-10-21 Sample acquisition method, training method, device and system for automatic driving model

Publications (2)

Publication Number Publication Date
CN111967450A CN111967450A (en) 2020-11-20
CN111967450B true CN111967450B (en) 2021-02-26

Family

ID=73387096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011129255.XA Active CN111967450B (en) 2020-10-21 2020-10-21 Sample acquisition method, training method, device and system for automatic driving model

Country Status (1)

Country Link
CN (1) CN111967450B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541531A (en) * 2020-12-02 2021-03-23 武汉光庭信息技术股份有限公司 System and method for acquiring and processing road video data
CN112560992B (en) * 2020-12-25 2023-09-01 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for optimizing picture classification model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205503A (en) * 2015-08-28 2015-12-30 重庆恢恢信息技术有限公司 Crowdsourcing-active-learning-based method for detecting abnormal picture
CN107832662A (en) * 2017-09-27 2018-03-23 百度在线网络技术(北京)有限公司 A kind of method and system for obtaining picture labeled data
CN109460652A (en) * 2018-11-09 2019-03-12 连尚(新昌)网络科技有限公司 For marking the method, equipment and computer-readable medium of image pattern
CN111369005A (en) * 2018-12-26 2020-07-03 杭州芄兰科技有限公司 Crowdsourcing marking system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101295305B (en) * 2007-04-25 2012-10-31 富士通株式会社 Image retrieval device
CN110378396A (en) * 2019-06-26 2019-10-25 北京百度网讯科技有限公司 Sample data mask method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111967450A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN104424466B (en) Method for checking object, body detection device and image pick up equipment
CN113435546B (en) Migratable image recognition method and system based on differentiation confidence level
CN109977191B (en) Problem map detection method, device, electronic equipment and medium
CN105574550A (en) Vehicle identification method and device
CN111967450B (en) Sample acquisition method, training method, device and system for automatic driving model
CN103295024A (en) Method and device for classification and object detection and image shoot and process equipment
WO2021184628A1 (en) Image processing method and device
CN113822247A (en) Method and system for identifying illegal building based on aerial image
CN111522951A (en) Sensitive data identification and classification technical method based on image identification
WO2022062968A1 (en) Self-training method, system, apparatus, electronic device, and storage medium
CN109993032A (en) A kind of shared bicycle target identification method, device and camera
CN116964588A (en) Target detection method, target detection model training method and device
CN111461143A (en) Picture copying identification method and device and electronic equipment
Ganapathy et al. A Malaysian vehicle license plate localization and recognition system
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN111784053A (en) Transaction risk detection method, device and readable storage medium
Ciuntu et al. Real-time traffic sign detection and classification using machine learning and optical character recognition
CN116384844A (en) Decision method and device based on geographic information cloud platform
CN111753618A (en) Image recognition method and device, computer equipment and computer readable storage medium
CN114550129B (en) Machine learning model processing method and system based on data set
CN116468931A (en) Vehicle part detection method, device, terminal and storage medium
CN110765900A (en) DSSD-based automatic illegal building detection method and system
CN114187625A (en) Video detection method based on video source automatic detection technology
CN114140025A (en) Multi-modal data-oriented vehicle insurance fraud behavior prediction system, method and device
CN113033518B (en) Image detection method, image detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 4 / F, building 5, 555 Dongqing Road, hi tech Zone, Ningbo City, Zhejiang Province

Applicant after: Ningbo Junlian Zhixing Technology Co.,Ltd.

Address before: 4 / F, building 5, 555 Dongqing Road, hi tech Zone, Ningbo City, Zhejiang Province

Applicant before: Ningbo Junlian Zhixing Technology Co.,Ltd.

GR01 Patent grant