CN112070049A - Semantic segmentation method under automatic driving scene based on BiSeNet - Google Patents


Info

Publication number
CN112070049A
CN112070049A
Authority
CN
China
Prior art keywords
bisenet
semantic segmentation
image
image data
automatic driving
Prior art date
Legal status
Granted
Application number
CN202010972176.9A
Other languages
Chinese (zh)
Other versions
CN112070049B (en)
Inventor
柯逍
蒋培龙
黄艳艳
Current Assignee
Fuzhou University
Original Assignee
Fuzhou University
Priority date
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN202010972176.9A (granted as CN112070049B)
Publication of CN112070049A
Application granted
Publication of CN112070049B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: Combinations of networks
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • Y02T 10/40: Engine management systems


Abstract

The invention relates to a semantic segmentation method under an automatic driving scene based on BiSeNet, which comprises the following steps: step S1: collecting and preprocessing image data of urban streets; step S2: labeling the preprocessed image data to obtain labeled image data; step S3: performing data enhancement on the labeled image data, and using the enhanced image data as a training set; step S4: constructing a BiSeNet neural network model, and training the model on the training set; step S5: preprocessing the video information acquired by a camera, and performing semantic segmentation on the urban street scene in the camera feed using the trained BiSeNet neural network model. The invention can effectively improve the safety of automatic driving and the accuracy and speed of road scene segmentation.

Description

Semantic segmentation method under automatic driving scene based on BiSeNet
Technical Field
The invention relates to the field of pattern recognition and computer vision, and in particular to a semantic segmentation method under an automatic driving scene based on BiSeNet.
Background
Image semantic segmentation is an essential part of modern autonomous driving systems, since an accurate understanding of the scene around the car is critical for navigation and action planning. Semantic segmentation can help autonomous vehicles identify the drivable regions in a picture. Since the emergence of Fully Convolutional Networks (FCN), convolutional neural networks have become the mainstream approach to semantic segmentation tasks, and many of these methods are adapted directly from convolutional network techniques in other fields. Over the last decade, researchers have devoted considerable effort to creating semantic segmentation datasets and improving the algorithms. Thanks to the development of deep learning theory, many advances have been made in the sub-field of visual scene understanding. The disadvantage of deep learning is that it requires a large amount of annotated data and is therefore time-consuming, but this flaw does not outweigh its merits.
Disclosure of Invention
In view of the above, the present invention provides a semantic segmentation method under an automatic driving scene based on BiSeNet, which can effectively improve the safety of automatic driving and the accuracy and speed of road scene segmentation.
In order to achieve the purpose, the invention adopts the following technical scheme:
a semantic segmentation method under an automatic driving scene based on BiSeNet comprises the following steps:
step S1: collecting and preprocessing image data of urban streets;
step S2: labeling the preprocessed image data to obtain labeled image data;
step S3: performing data enhancement on the labeled image data, and using the enhanced image data as a training set;
step S4: constructing a BiSeNet neural network model, and training the model on the training set;
step S5: preprocessing the video information acquired by a camera, and performing semantic segmentation on the urban street scene in the camera feed using the trained BiSeNet neural network model.
Further, the step S1 is specifically:
step S11, analyzing the categories requiring semantic segmentation in the urban street scene;
step S12, collecting urban street images;
step S13, preprocessing the collected urban street images based on the semantic segmentation categories obtained in step S11, and removing pictures that do not meet the preset requirements.
Further, the categories of semantic segmentation specifically include roads, sidewalks, parking lots, railways, people, cars, trucks, buses, trains, motorcycles, bicycles, caravans, trailers, buildings, walls, fences, guardrails, bridges, tunnels, poles, traffic signs, traffic lights, foliage, sky and others.
Further, the step S2 is specifically:
step S21: outlining the category edges in each image using labelme, and storing the position information and classification information of each polygon in a json file;
step S22: generating, with labelme, files meeting the preset requirements from the json files produced by the labeling.
Further, the files generated in step S22 include a jpeg original image, a semantic segmentation class mask image, and a semantic segmentation class visualization image.
Further, the step S3 specifically includes:
applying a flip transformation to all pictures in the labeled image data, changing the corresponding mask pictures accordingly, and adding the flipped pictures to a new data set;
applying color jitter to all pictures in the labeled image data, leaving the corresponding mask pictures unchanged, and adding the color-jittered pictures to the new data set;
applying a translation transformation to all pictures in the labeled image data, changing the corresponding mask pictures accordingly, and adding the translated pictures to the new data set;
and applying a contrast transformation to all pictures in the labeled image data, leaving the corresponding mask pictures unchanged, and adding the contrast-transformed pictures to the new data set.
Further, the training of the BiSeNet neural network model is specifically as follows:
step S41, training with the deep learning framework PyTorch and setting the initial parameters;
step S42, changing the tensor size during forward propagation by adjusting the stride of the convolution kernels;
step S43, adding auxiliary loss functions to guide the training process;
step S44: updating the weights and biases of the convolutional neural network using stochastic gradient descent;
step S45: after N training iterations, adjusting the learning rate to 10⁻⁴ and continuing training;
step S46: stopping training after the iterations reach a preset value, and saving the trained model.
Further, the joint loss function with the auxiliary losses is specifically:
L(X; W) = l_p(X; W) + α Σ_{i=2}^{K} l_i(X_i; W)
where L(X; W) is the joint loss function, l_p(X; W) is the principal loss function, l_i(X_i; W) is the i-th auxiliary loss function, α is the weight balancing the principal and auxiliary losses, K is the number of supervised stages, X is the input image, W denotes the model parameters, and X_i is the feature map output at the i-th stage.
Further, the step S5 is specifically:
step S51: extracting each frame of the video information collected by the camera as an input image;
step S52: resizing the input image to a preset size;
step S53: obtaining a prediction map from the image obtained in step S52 through the BiSeNet neural network model;
step S54: scaling the prediction map back to the original camera resolution to obtain a comparison image;
step S55: fusing the comparison image with the original image to generate the final result image.
Compared with the prior art, the invention has the following beneficial effects:
1. The method can effectively perform semantic segmentation on street images and improves the semantic segmentation effect.
2. The invention combines principal and auxiliary training loss functions, which accelerates training, improves convergence, and yields a smaller model size.
3. The invention achieves high accuracy and speed when processing higher-resolution image and video data.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, the present invention provides a semantic segmentation method in an automatic driving scenario based on BiSeNet, which includes the following steps:
step S1: collecting and preprocessing image data of urban streets;
step S2: labeling the preprocessed image data to obtain labeled image data;
step S3: performing data enhancement on the labeled image data, and using the enhanced image data as a training set;
step S4: constructing a BiSeNet neural network model, and training the model on the training set;
step S5: preprocessing the video information acquired by a camera, and performing semantic segmentation on the urban street scene in the camera feed using the trained BiSeNet neural network model.
Further, the step S1 is specifically:
step S11, analyzing the categories requiring semantic segmentation in the urban street scene;
step S12, collecting urban street images;
step S13, preprocessing the collected urban street images based on the semantic segmentation categories obtained in step S11, and removing pictures that do not meet the preset requirements.
In this embodiment, the semantically segmented categories specifically include roads, sidewalks, parking lots, railways, people, cars, trucks, buses, trains, motorcycles, bicycles, caravans, trailers, buildings, walls, fences, guardrails, bridges, tunnels, poles, traffic signs, traffic lights, foliage, sky, and others.
In this embodiment, the step S2 specifically includes:
step S21: outlining the category edges in each image using labelme, and storing the position information and classification information of each polygon in a json file;
step S22: generating, with labelme, files meeting the preset requirements from the json files produced by the labeling. The generated files include a jpeg original image, a semantic segmentation class mask image, and a semantic segmentation class visualization image.
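By way of illustration, the conversion from a labelme json file to a class mask can be sketched as follows in Python. This is not the patent's own tooling; the class subset, the id assignment, and the file names are assumptions made for the example:

```python
import json

import numpy as np
from PIL import Image, ImageDraw

# Hypothetical class ids; the embodiment uses the 24 street categories listed earlier.
CLASSES = ["background", "road", "sidewalk", "person", "car"]
LABEL_TO_ID = {name: i for i, name in enumerate(CLASSES)}

def labelme_json_to_mask(json_path):
    """Rasterize the labelme polygons of one image into a single-channel class mask."""
    with open(json_path, "r", encoding="utf-8") as f:
        data = json.load(f)
    mask = Image.new("L", (data["imageWidth"], data["imageHeight"]), 0)
    draw = ImageDraw.Draw(mask)
    for shape in data["shapes"]:  # each shape carries one polygon and its label
        class_id = LABEL_TO_ID.get(shape["label"], 0)
        polygon = [tuple(point) for point in shape["points"]]
        draw.polygon(polygon, fill=class_id)
    return np.array(mask)

# Usage (file names are illustrative):
# mask = labelme_json_to_mask("street_0001.json")
# Image.fromarray(mask).save("street_0001_mask.png")
```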
In this embodiment, the step S3 specifically includes:
applying a flip transformation to all pictures in the labeled image data, changing the corresponding mask pictures accordingly, and adding the flipped pictures to a new data set;
applying color jitter to all pictures in the labeled image data, leaving the corresponding mask pictures unchanged, and adding the color-jittered pictures to the new data set;
applying a translation transformation to all pictures in the labeled image data, changing the corresponding mask pictures accordingly, and adding the translated pictures to the new data set;
and applying a contrast transformation to all pictures in the labeled image data, leaving the corresponding mask pictures unchanged, and adding the contrast-transformed pictures to the new data set.
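The four enhancement operations can be sketched with torchvision as below; the library choice and the jitter and translation magnitudes are assumptions, since the patent only specifies which operations alter the mask. Geometric transforms are applied to the picture and its mask alike, while photometric ones leave the mask unchanged:

```python
import random

import torchvision.transforms.functional as TF

def enhance(image, mask):
    """Return the four augmented (picture, mask) pairs described in step S3."""
    pairs = []
    # 1. Flip transformation: the mask is flipped the same way.
    pairs.append((TF.hflip(image), TF.hflip(mask)))
    # 2. Color jitter: photometric only, so the mask is left unchanged.
    jittered = TF.adjust_brightness(image, random.uniform(0.8, 1.2))
    jittered = TF.adjust_saturation(jittered, random.uniform(0.8, 1.2))
    pairs.append((jittered, mask))
    # 3. Translation: geometric, so the mask is shifted identically.
    dx, dy = random.randint(-32, 32), random.randint(-32, 32)
    shifted_image = TF.affine(image, angle=0.0, translate=[dx, dy], scale=1.0, shear=[0.0])
    shifted_mask = TF.affine(mask, angle=0.0, translate=[dx, dy], scale=1.0, shear=[0.0])
    pairs.append((shifted_image, shifted_mask))
    # 4. Contrast transformation: photometric only, so the mask is left unchanged.
    pairs.append((TF.adjust_contrast(image, random.uniform(0.7, 1.3)), mask))
    return pairs
```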
In this embodiment, the training of the BiSeNet neural network model is specifically as follows:
step S41, training with the deep learning framework PyTorch, with the initial parameters set as follows:
initial learning rate: 0.025;
weight decay: 0.0005;
momentum: 0.9;
batch size: 16;
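With these values, the optimizer setup might look as follows; this is a sketch, and the placeholder network merely stands in for the BiSeNet model of step S4:

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the BiSeNet model constructed in step S4.
model = nn.Conv2d(3, 25, kernel_size=3, padding=1)

# Initial parameters from step S41.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.025,             # initial learning rate
    momentum=0.9,         # momentum
    weight_decay=0.0005,  # weight decay
)

# The enhanced training set of step S3 would then be loaded in batches of 16, e.g.:
# loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True)
```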
step S42, changing the tensor size during forward propagation by adjusting the stride of the convolution kernels;
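The idea of step S42 is that downsampling comes from strided convolutions rather than pooling. For example, a path of three stride-2 convolution blocks reduces the input to 1/8 resolution; the channel widths below follow the spatial path of the original BiSeNet paper and are an assumption here:

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """3x3 convolution block whose stride controls the output tensor size."""
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Three stride-2 blocks: each halves the spatial size, i.e. 1/8 resolution overall.
spatial_path = nn.Sequential(
    ConvBNReLU(3, 64, stride=2),
    ConvBNReLU(64, 128, stride=2),
    ConvBNReLU(128, 256, stride=2),
)
x = torch.randn(1, 3, 512, 1024)  # e.g. one street image as a tensor
print(spatial_path(x).shape)      # torch.Size([1, 256, 64, 128])
```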
step S43, adding the auxiliary losses to guide the training process, which accelerates training and makes convergence easier;
the loss function is:
L(X; W) = l_p(X; W) + α Σ_{i=2}^{K} l_i(X_i; W)
where L(X; W) is the joint loss function, l_p(X; W) is the principal loss function, l_i(X_i; W) is the i-th auxiliary loss function, α is the weight balancing the principal and auxiliary losses, K is the number of supervised stages, X is the input image, W denotes the model parameters, and X_i is the feature map output at the i-th stage; the auxiliary loss functions are used only in the training stage;
step S44: updating the weights and biases of the convolutional neural network using stochastic gradient descent;
step S45: after 10000 training iterations, adjusting the learning rate to 10⁻⁴ and continuing training;
step S46: stopping training after 50000 iterations, and saving the trained model.
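The schedule of steps S44 to S46 could then be realized as below, continuing the optimizer and loss sketches above; the convention that the model's forward pass returns the principal logits and a list of auxiliary logits is an assumption:

```python
import torch

def train(model, loader, optimizer, joint_loss, max_iters=50000, drop_at=10000):
    """Iteration schedule of steps S44-S46; model is assumed to return
    (principal logits, list of auxiliary logits) in training mode."""
    iteration = 0
    while iteration < max_iters:
        for images, masks in loader:
            if iteration == drop_at:
                for group in optimizer.param_groups:
                    group["lr"] = 1e-4  # step S45: learning rate adjusted to 10^-4
            optimizer.zero_grad()
            main_logits, aux_logits_list = model(images)
            loss = joint_loss(main_logits, aux_logits_list, masks)
            loss.backward()
            optimizer.step()
            iteration += 1
            if iteration >= max_iters:
                break
    torch.save(model.state_dict(), "bisenet_street.pth")  # step S46; file name illustrative
```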
In this embodiment, the step S5 specifically includes:
step S51: extracting each frame of the video information collected by the camera as an input image;
step S52: resizing the input image to a preset size;
step S53: obtaining a prediction map from the image obtained in step S52 through the BiSeNet neural network model;
step S54: scaling the prediction map back to the original camera resolution to obtain a comparison image;
step S55: fusing the comparison image with the original image to generate the final result image.
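Steps S51 to S55 amount to a per-frame resize, forward pass, upscale, and overlay. A sketch with OpenCV follows; the video library, the palette, the preset size, the camera index, and the fusion weights are assumptions not fixed by the patent:

```python
import cv2
import numpy as np
import torch

# One BGR color per class id; the palette itself is illustrative.
PALETTE = np.random.randint(0, 256, size=(25, 3), dtype=np.uint8)

def segment_stream(model, camera_index=0, preset=(1024, 512)):
    """Steps S51-S55: segment each camera frame and overlay the result."""
    model.eval()
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()                     # step S51: one frame as the input image
        if not ok:
            break
        h, w = frame.shape[:2]
        small = cv2.resize(frame, preset)          # step S52: adjust to the preset size
        x = torch.from_numpy(small).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            logits = model(x)                      # step S53: prediction map (N, C, H, W)
        pred = logits.argmax(dim=1).squeeze(0).numpy().astype(np.uint8)
        color = PALETTE[pred]                      # class ids -> colors
        color = cv2.resize(color, (w, h), interpolation=cv2.INTER_NEAREST)  # step S54
        result = cv2.addWeighted(frame, 0.6, color, 0.4, 0)                 # step S55: fuse
        cv2.imshow("segmentation", result)
        if cv2.waitKey(1) == 27:                   # Esc stops the loop
            break
    cap.release()
    cv2.destroyAllWindows()
```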
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention shall fall within the scope of protection of the present invention.

Claims (9)

1. A semantic segmentation method under an automatic driving scene based on BiSeNet is characterized by comprising the following steps:
step S1: collecting and preprocessing image data of urban streets;
step S2: labeling the preprocessed image data to obtain labeled image data;
step S3: performing data enhancement on the labeled image data, and using the enhanced image data as a training set;
step S4: constructing a BiSeNet neural network model, and training the model on the training set;
step S5: preprocessing the video information acquired by a camera, and performing semantic segmentation on the urban street scene in the camera feed using the trained BiSeNet neural network model.
2. The semantic segmentation method under the BiSeNet-based automatic driving scene according to claim 1, wherein the step S1 specifically comprises:
step S11, analyzing the categories requiring semantic segmentation in the urban street scene;
step S12, collecting urban street images;
step S13, preprocessing the collected urban street images based on the semantic segmentation categories obtained in step S11, and removing pictures that do not meet the preset requirements.
3. The semantic segmentation method under the BiSeNet-based automatic driving scene according to claim 2, wherein the categories of semantic segmentation specifically include roads, sidewalks, parking lots, railways, people, cars, trucks, buses, trains, motorcycles, bicycles, caravans, trailers, buildings, walls, fences, guardrails, bridges, tunnels, poles, traffic signs, traffic lights, foliage, sky and others.
4. The semantic segmentation method under the BiSeNet-based automatic driving scene according to claim 1, wherein the step S2 specifically comprises:
step S21: outlining the category edges in each image using labelme, and storing the position information and classification information of each polygon in a json file;
step S22: generating, with labelme, files meeting the preset requirements from the json files produced by the labeling.
5. The semantic segmentation method under the BiSeNet-based automatic driving scene according to claim 4, wherein the files generated in step S22 include a jpeg original image, a semantic segmentation class mask image, and a semantic segmentation class visualization image.
6. The semantic segmentation method under the BiSeNet-based automatic driving scene according to claim 1, wherein the step S3 specifically comprises:
applying a flip transformation to all pictures in the labeled image data, changing the corresponding mask pictures accordingly, and adding the flipped pictures to a new data set;
applying color jitter to all pictures in the labeled image data, leaving the corresponding mask pictures unchanged, and adding the color-jittered pictures to the new data set;
applying a translation transformation to all pictures in the labeled image data, changing the corresponding mask pictures accordingly, and adding the translated pictures to the new data set;
and applying a contrast transformation to all pictures in the labeled image data, leaving the corresponding mask pictures unchanged, and adding the contrast-transformed pictures to the new data set.
7. The method for semantic segmentation in the BiSeNet-based automatic driving scene according to claim 1, wherein the BiSeNet neural network model is trained as follows:
step S41, training with the deep learning framework PyTorch and setting the initial parameters;
step S42, changing the tensor size during forward propagation by adjusting the stride of the convolution kernels;
step S43, adding auxiliary loss functions to guide the training process;
step S44: updating the weights and biases of the convolutional neural network using stochastic gradient descent;
step S45: after N training iterations, adjusting the learning rate to 10⁻⁴ and continuing training;
step S46: stopping training after the iterations reach a preset value, and saving the trained model.
8. The semantic segmentation method under the BiSeNet-based automatic driving scene according to claim 1, wherein the joint loss function with the auxiliary losses is specifically:
L(X; W) = l_p(X; W) + α Σ_{i=2}^{K} l_i(X_i; W)
where L(X; W) is the joint loss function, l_p(X; W) is the principal loss function, l_i(X_i; W) is the i-th auxiliary loss function, α is the weight balancing the principal and auxiliary losses, K is the number of supervised stages, X is the input image, W denotes the model parameters, and X_i is the feature map output at the i-th stage.
9. The semantic segmentation method under the BiSeNet-based automatic driving scene according to claim 1, wherein the step S5 specifically comprises:
step S51: extracting each frame of the video information collected by the camera as an input image;
step S52: resizing the input image to a preset size;
step S53: obtaining a prediction map from the image obtained in step S52 through the BiSeNet neural network model;
step S54: scaling the prediction map back to the original camera resolution to obtain a comparison image;
step S55: fusing the comparison image with the original image to generate the final result image.
CN202010972176.9A 2020-09-16 2020-09-16 Semantic segmentation method under automatic driving scene based on BiSeNet Active CN112070049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010972176.9A CN112070049B (en) 2020-09-16 2020-09-16 Semantic segmentation method under automatic driving scene based on BiSeNet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010972176.9A CN112070049B (en) 2020-09-16 2020-09-16 Semantic segmentation method under automatic driving scene based on BiSeNet

Publications (2)

Publication Number Publication Date
CN112070049A (en) 2020-12-11
CN112070049B CN112070049B (en) 2022-08-09

Family

ID=73696914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010972176.9A Active CN112070049B (en) 2020-09-16 2020-09-16 Semantic segmentation method under automatic driving scene based on BiSeNet

Country Status (1)

Country Link
CN (1) CN112070049B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180189951A1 (en) * 2017-01-04 2018-07-05 Cisco Technology, Inc. Automated generation of pre-labeled training data
CN109101907A (en) * 2018-07-28 2018-12-28 华中科技大学 A kind of vehicle-mounted image, semantic segmenting system based on bilateral segmentation network
CN110188817A (en) * 2019-05-28 2019-08-30 厦门大学 A kind of real-time high-performance street view image semantic segmentation method based on deep learning
CN110827505A (en) * 2019-10-29 2020-02-21 天津大学 Smoke segmentation method based on deep learning
CN111598095A (en) * 2020-03-09 2020-08-28 浙江工业大学 Deep learning-based urban road scene semantic segmentation method
CN111462126A (en) * 2020-04-08 2020-07-28 武汉大学 Semantic image segmentation method and system based on edge enhancement

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598003A (en) * 2020-12-18 2021-04-02 燕山大学 Real-time semantic segmentation method based on data expansion and full-supervision preprocessing
CN113989510A (en) * 2021-12-28 2022-01-28 深圳市万物云科技有限公司 River drainage outlet overflow detection method and device and related equipment
CN113989510B (en) * 2021-12-28 2022-03-11 深圳市万物云科技有限公司 River drainage outlet overflow detection method and device and related equipment
CN114332140A (en) * 2022-03-16 2022-04-12 北京文安智能技术股份有限公司 Method for processing traffic road scene image
CN114821524A (en) * 2022-04-11 2022-07-29 苏州大学 BiSeNet-based rail transit road identification optimization method

Also Published As

Publication number Publication date
CN112070049B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN112070049B (en) Semantic segmentation method under automatic driving scene based on BiSeNet
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN111563909B (en) Semantic segmentation method for complex street view image
CN110503716B (en) Method for generating motor vehicle license plate synthetic data
CN111598030A (en) Method and system for detecting and segmenting vehicle in aerial image
CN114677507A (en) Street view image segmentation method and system based on bidirectional attention network
CN111310593B (en) Ultra-fast lane line detection method based on structure perception
CN109657614B (en) Automatic road identification method in aerial photography road traffic accident scene investigation
CN112862839B (en) Method and system for enhancing robustness of semantic segmentation of map elements
CN113506300A (en) Image semantic segmentation method and system based on rainy complex road scene
Kavitha et al. Pothole and object detection for an autonomous vehicle using yolo
CN114092917A (en) MR-SSD-based shielded traffic sign detection method and system
Jin et al. A semi-automatic annotation technology for traffic scene image labeling based on deep learning preprocessing
CN114898243A (en) Traffic scene analysis method and device based on video stream
CN115115915A (en) Zebra crossing detection method and system based on intelligent intersection
CN112785610B (en) Lane line semantic segmentation method integrating low-level features
Lin et al. A lightweight, high-performance multi-angle license plate recognition model
CN117710764A (en) Training method, device and medium for multi-task perception network
CN111160282A (en) Traffic light detection method based on binary Yolov3 network
CN116071399A (en) Track prediction method and device, model training method and device and electronic equipment
CN116311146A (en) Traffic sign detection method based on deep learning
CN111899283B (en) Video target tracking method
Peng et al. Semantic segmentation model for road scene based on encoder-decoder structure
CN113269088A (en) Scene description information determining method and device based on scene feature extraction
CN114419018A (en) Image sampling method, system, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant