CN114002841B - Depth-of-field synthesis method of intelligent digital microscope - Google Patents


Info

Publication number
CN114002841B
CN114002841B (application CN202111219520.8A)
Authority
CN
China
Prior art keywords
image
depth
microscope
lens
field synthesis
Prior art date
Legal status: Active
Application number
CN202111219520.8A
Other languages
Chinese (zh)
Other versions
CN114002841A (en)
Inventor
马常风
杨欣然
吴德胜
郭延文
Current Assignee
Ningbo Shengda Instrument Co ltd
Original Assignee
Ningbo Huasitu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Huasitu Technology Co ltd filed Critical Ningbo Huasitu Technology Co ltd
Priority to CN202111219520.8A priority Critical patent/CN114002841B/en
Publication of CN114002841A publication Critical patent/CN114002841A/en
Application granted granted Critical
Publication of CN114002841B publication Critical patent/CN114002841B/en

Classifications

    • G PHYSICS / G02 OPTICS / G02B Optical elements, systems or apparatus
    • G02B 21/00 Microscopes
    • G02B 21/36 Microscopes arranged for photographic, projection, digital imaging or video purposes, including associated control and data processing arrangements
    • G02B 21/365 Control or image processing arrangements for digital or video microscopes
    • G02B 21/367 Control or image processing arrangements providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • G06F 17/10 Complex mathematical operations; G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; smoothing
    • G06T 7/50 Depth or shape recovery
    • G06T 7/90 Determination of colour characteristics


Abstract

The invention discloses a depth-of-field synthesis method for digital microscope imaging. The method studies depth-of-field synthesis during microscope use; building on theory from the related field, the software algorithms are implemented with image-processing methods such as spatial-domain transformation so that the lens images are processed efficiently. That is, given a group of microscope images focused on areas at different heights, a fully sharp image of the whole frame is synthesized automatically across the object's different depth-of-field regions.

Description

Depth-of-field synthesis method of intelligent digital microscope
Technical Field
The invention relates to computer graphics, multimedia digital information technology, and image processing, and in particular to a depth-of-field synthesis method for a new generation of intelligent digital microscopes.
Background
As technology advances, the demand for exploring the microscopic world in scientific research and industrial production grows ever stronger. The microscope is an important tool that helps people study the mysteries of the micro world at the microscopic level. As early as the 16th century, scientists invented the microscope, which uses optical lenses to produce magnified images of objects so that users can better observe and study them. Microscopes have now been under development for over 400 years. With progress in modern optoelectronics and computer technology, microscopes have become ever more precise, more capable, and markedly more widespread, and remain an important tool for exploring the microscopic domain. Today, microscopes and related technologies are widely used in medicine, physics, biology, chemistry, precision manufacturing, and many other fields. In manufacturing, microscopes can be used for visual inspection of industrial parts, aiding the upgrade to intelligent manufacturing.
At present, the microscopes produced by most domestic manufacturers remain at the stage of the traditional optical microscope, with a single function and weak market competitiveness. The high-end microscope market at home and abroad is largely monopolized by international giants such as Keyence of Japan. As technology advances, the microscope industry faces a transformation from traditional optical microscopes to digital, intelligent microscopes. Compared with a traditional optical microscope, a digital microscope is equipped with a camera and can image the observed object digitally for display on a screen. Because the imaged object may be non-planar, that is, composed of multiple planar regions at different heights, the lens must support a depth-of-field synthesis function that renders every region of the object sharply: after the user selects the observation range, the microscope continuously adjusts the focal length and fuses the sharp images from the different planar regions, producing a full-frame sharp image of the object and a 3D model. This makes the object easy to observe and serves scientific experiments, quality inspection, and other industrial applications.
Disclosure of Invention
Purpose of the invention: to design and implement a multifunctional digital microscope depth-of-field synthesis method that provides guidance for the image-processing part of an intelligent digital microscope.
The method comprises the following steps:
Step 1, reading digital images: the user clicks an area of the object's digital image with the mouse to observe it; the microscope moves continuously, and the digital images captured while the lens moves are read in;
Step 2, spatial-domain transformation of the digital images: spatial convolution filtering is performed with the Laplace operator;
Step 3, depth-of-field synthesis module: a sharpness evaluation function is computed from the spatial-domain filtering information, and the depth-of-field synthesis process of the lens is realized by image processing;
Step 4, 3D generation module: an enlarged three-dimensional geometric model of the observed object is generated from the depth-of-field synthesis result.
Step 1 comprises the following steps:
step 1-1, set the width and height of the processed pixels, and take a group of images of continuously varying sharpness (namely, the discrete images of each layer obtained as the microscope lens moves from top to bottom) as system input;
step 1-2, design a lens class implementing the basic image handling functions: initialization, switching to the sharp image (the microscope's focal-length adjustment process), returning the image, returning the focal-length value, returning pixel values, and returning the picture sequence; the lens is moved in real time according to the user's mouse operation to acquire images; the content of this step is a default function of the intelligent digital microscope and belongs to the known prior art;
step 1-3, provide an interface to the subsequent depth-of-field synthesis module for quickly reading or modifying the image data.
Step 2 comprises the following steps:
step 2-1, obtain the discrete picture groups of different sharpness captured by the microscope lens through the interface of the picture-group loading module;
step 2-2, initialize the module and matrices, readjust the image resolution, and convert the original RGB image to grayscale for subsequent processing;
step 2-3, in the process of spatial domain transformation, the differential definition of the Laplace operator is as follows:
∇²f = ∂²f/∂x² + ∂²f/∂y²

By derivation, the second derivatives above can be approximated as

∂²f/∂x² ≈ f(x+1, y) + f(x−1, y) − 2f(x, y)

and

∂²f/∂y² ≈ f(x, y+1) + f(x, y−1) − 2f(x, y)

so that

∇²f(x, y) ≈ f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)

A convolution operation can then be performed between the digital image matrix and the following spatial mask L:

L = [ 0  1  0
      1 −4  1
      0  1  0 ]
the above expression may be implemented at any point (x, y) in the digital image.
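As an illustration, the convolution with the mask L above can be sketched in a few lines of NumPy; the edge-replication border handling and the helper name `laplacian_filter` are choices made here for the sketch, not specified by the patent:

```python
import numpy as np

# 3x3 Laplacian spatial mask L from step 2-3 (standard 4-neighbour form).
L = np.array([[0,  1, 0],
              [1, -4, 1],
              [0,  1, 0]], dtype=np.float64)

def laplacian_filter(gray):
    """Convolve a 2-D grayscale array with the Laplacian mask L.

    Borders are handled by edge replication, an implementation choice.
    """
    padded = np.pad(gray.astype(np.float64), 1, mode="edge")
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.float64)
    for s in range(3):
        for t in range(3):
            # Shifted view of the padded image aligned with mask entry (s, t).
            out += L[s, t] * padded[s:s + h, t:t + w]
    return out

# A constant image has zero second derivative everywhere.
flat = np.full((5, 5), 7.0)
print(np.abs(laplacian_filter(flat)).max())  # → 0.0
```

A linear intensity ramp likewise produces a zero response in the interior, which is why the Laplacian responds only to fine detail and edges, the property exploited by the sharpness evaluation in step 3.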
Step 3 comprises the following steps:
step 3-1, after keyboard input switches into depth-of-field synthesis mode, select via keys the range and order of the images in the group to be synthesized;
step 3-2, apply Laplace filtering and then bilateral filtering to the current image to obtain the RGB values of a smoother image, displayed in a small window in real time. The processing result can be expressed directly as a gradient matrix, denoted G(x, y). Taking the sum of squared gradient values over all pixel points of the matrix yields the sharpness evaluation function for depth-of-field synthesis:

F = Σ_x Σ_y G²(x, y),

where:

G(x, y) = Σ_{s=−1}^{1} Σ_{t=−1}^{1} L(s, t) f(x+s, y+t),

and L is the Laplace operator matrix template;
step 3-3, repeat the filtering of step 3-2 for all selected images to be synthesized, judging and updating in real time by sharpness, displaying the current synthesis result in a large window, and saving the focal-length value of each region;
step 3-4, if there is no further image-processing requirement, either select the images for the next round of synthesis (returning to step 3-1) or end the program.
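The selection logic of steps 3-2 and 3-3 can be illustrated with a minimal focus-stacking sketch. Note that the patent evaluates sharpness per region with the function F; this simplified version (function names and border handling are assumptions made here) picks the sharpest slice for each individual pixel:

```python
import numpy as np

LAP = np.array([[0, 1, 0],
                [1, -4, 1],
                [0, 1, 0]], dtype=np.float64)

def sharpness_response(gray):
    """Per-pixel |Laplacian| response of a 2-D grayscale array
    (edge-replicated borders, an implementation choice)."""
    h, w = gray.shape
    p = np.pad(gray.astype(np.float64), 1, mode="edge")
    r = np.zeros((h, w))
    for s in range(3):
        for t in range(3):
            r += LAP[s, t] * p[s:s + h, t:t + w]
    return np.abs(r)

def focus_stack(grays, colors):
    """For every pixel, keep the value from the stack slice whose local
    Laplacian response is largest. Returns the fused image and the index
    map of the winning slice (usable as a focal-plane label per region).
    `colors` are 2-D here for simplicity; RGB slices index the same way.
    """
    scores = np.stack([sharpness_response(g) for g in grays])  # (n, H, W)
    best = scores.argmax(axis=0)            # index of sharpest slice
    h, w = best.shape
    rows, cols = np.mgrid[0:h, 0:w]
    fused = np.stack(colors)[best, rows, cols]  # gather winning pixels
    return fused, best
```

In the patent's pipeline the index map `best` plays the role of the saved focal-length values, which step 4 later converts into heights.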
Step 4 comprises the following steps:
step 4-1, while extracting sharp pixels during depth-of-field synthesis, also extract the distance from each pixel to the end of the microscope lens barrel so that the pixel's height can be computed; a height map is thus generated once depth-of-field synthesis finishes;
step 4-2, remove salt-and-pepper noise from the height map with mean filtering, then smooth it with Gaussian filtering;
step 4-3, generate three-dimensional points from each pixel's position and corresponding height to obtain 3D data, and display the 3D result in a graphical interface, enabling stereoscopic, all-around observation of the inspected object, serving applications such as intelligent manufacturing and visual inspection.
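Step 4-3 amounts to lifting each pixel into 3D using its height. A minimal sketch follows; the `pixel_size` calibration parameter is an assumption introduced here, since the patent derives heights from the saved focal values without specifying the lateral scale:

```python
import numpy as np

def height_map_to_points(height, pixel_size=1.0):
    """Convert a per-pixel height map (H, W) into an (H*W, 3) point cloud.

    `pixel_size` is an assumed world-units-per-pixel calibration factor.
    Each output row is (x, y, z) with z taken from the height map.
    """
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]           # pixel grid coordinates
    pts = np.column_stack([xs.ravel() * pixel_size,
                           ys.ravel() * pixel_size,
                           height.ravel()])
    return pts
```

The resulting point array can then be handed to any graphical interface or mesh builder for the 3D display described above.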
Beneficial effects: the invention has the following notable advantages:
(1) The proposed intelligent digital microscope depth-of-field synthesis algorithm obtains depth-of-field information quickly and accurately, without the user manually switching lenses, and yields better image-processing results.
(2) The proposed image depth-of-field synthesis algorithm is fast and efficient: the average processing time for a digital image at 1920×1080 resolution is about 0.02 s, meeting real-time requirements.
(3) The proposed digital microscope depth-of-field synthesis algorithm can directly guide the production of industrial microscope lenses and has a wide range of applications.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Fig. 1 is the basic flow chart of the depth-of-field synthesis algorithm.
Figs. 2a and 2b show the results of the microscope focusing on two different areas of the circuit-board image data.
Fig. 3 is the global synthesis result of the depth-of-field synthesis module on the circuit-board picture group.
Fig. 4 shows the generated 3D result.
Detailed Description
The invention is further explained by the following embodiments in conjunction with the drawings.
The flow of the method is shown in Fig. 1 and proceeds as follows: first, external image information is acquired through the microscope lens, the image-processing module is initialized, and the depth-of-field synthesis function is entered according to the user's needs; the image resolution is adjusted, the corresponding grayscale transformation is applied, and Laplace filtering and bilateral filtering are performed on the digital images of the group; the current optimal focal length is computed and selected in real time according to the sharpness evaluation function F, and the result is saved; after all processing is finished, the global depth-of-field synthesis result is generated for 3D generation and inspection; a new group of images can then be selected for depth-of-field synthesis until the module is exited.
Specifically, as shown in Fig. 1, the invention designs and implements a depth-of-field synthesis algorithm for a new generation of intelligent digital microscope systems, mainly comprising the following steps:
Step 1, reading digital images: the user clicks an area of the object's digital image with the mouse to observe it; the microscope moves continuously, and the digital images captured while the lens moves are read in;
Step 2, spatial-domain transformation of the digital images: spatial convolution filtering is performed with the Laplace operator;
Step 3, depth-of-field synthesis module: a sharpness evaluation function is computed from the spatial-domain filtering information, and the depth-of-field synthesis process of the lens is realized by image processing;
Step 4, 3D generation module: an enlarged three-dimensional geometric model of the observed object is generated from the depth-of-field synthesis result.
For step 1, the implementation details of picture-group loading and preparation are as follows:
step 1-1, set the width and height of the processed pixels, and take a group of images of continuously varying sharpness (namely, the discrete images of each layer obtained as the microscope lens moves from top to bottom) as system input;
step 1-2, design a lens class implementing the basic image handling functions: initialization, switching to the sharp image (the microscope's focal-length adjustment process), returning the image, returning the focal-length value, returning pixel values, and returning the picture sequence; the lens is moved in real time according to the user's mouse operation to acquire images;
step 1-3, provide an interface to the subsequent depth-of-field synthesis module for quickly reading or modifying the image data.
For step 2, the implementation details of the spatial-domain transformation are as follows:
step 2-1, obtain the discrete picture groups of different sharpness captured by the microscope lens through the interface of the picture-group loading module;
step 2-2, initialize the module and matrices, readjust the image resolution, and convert the original RGB image to grayscale for subsequent processing;
step 2-3, in the process of spatial domain transformation, the differential definition of the used Laplace operator is as follows:
∇²f = ∂²f/∂x² + ∂²f/∂y²

By derivation, the second derivatives above can be approximated as

∂²f/∂x² ≈ f(x+1, y) + f(x−1, y) − 2f(x, y)

and

∂²f/∂y² ≈ f(x, y+1) + f(x, y−1) − 2f(x, y)

so that

∇²f(x, y) ≈ f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)

A convolution operation can then be performed between the digital image matrix and the following spatial mask L:

L = [ 0  1  0
      1 −4  1
      0  1  0 ]
the above expression may be implemented at any point (x, y) in the digital image.
For step 3, the implementation details of the depth-of-field synthesis module are as follows:
step 3-1, after keyboard input switches into depth-of-field synthesis mode, select via keys the range and order of the images in the group to be synthesized;
step 3-2, apply Laplace filtering and then bilateral filtering to the current image to obtain the RGB values of a smoother image, displayed in a small window in real time. The processing result can be expressed directly as a gradient matrix, denoted G(x, y). Taking the sum of squared gradient values over all pixel points of the matrix yields the sharpness evaluation function for depth-of-field synthesis:

F = Σ_x Σ_y G²(x, y),

where:

G(x, y) = Σ_{s=−1}^{1} Σ_{t=−1}^{1} L(s, t) f(x+s, y+t),

and L is the Laplace operator matrix template;
step 3-3, repeat the filtering of step 3-2 for all selected images to be synthesized, judging and updating in real time by sharpness, displaying the current synthesis result in a large window, and saving the focal-length value of each region;
step 3-4, if there is no further image-processing requirement, either select the images for the next round of synthesis (returning to step 3-1) or end the program.
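The bilateral filtering mentioned in step 3-2 can be sketched naively as follows; the window radius and the two sigma values are assumed tuning parameters, not given in the patent:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Naive bilateral filter over a 2-D array.

    Each output pixel is a weighted mean of its neighbourhood, where the
    weight combines spatial closeness (sigma_s) with intensity similarity
    (sigma_r), so noise is smoothed while strong edges survive.
    Parameter values are assumptions for illustration.
    """
    img = img.astype(np.float64)
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + h,
                          radius + dx: radius + dx + w]
            g_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            g_r = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
            wgt = g_s * g_r
            acc += wgt * shifted
            norm += wgt
    return acc / norm
```

Because the range term suppresses contributions from across a sharp intensity step, the filter smooths each focal region without blurring the boundaries that the Laplacian sharpness measure relies on.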
For step 4, the implementation details of the 3D generation module are as follows:
step 4-1, while extracting sharp pixels during depth-of-field synthesis, also extract the distance from each pixel to the end of the microscope lens barrel so that the pixel's height can be computed; a height map is thus generated once depth-of-field synthesis finishes;
step 4-2, remove salt-and-pepper noise from the height map with mean filtering, then smooth it with Gaussian filtering;
step 4-3, generate three-dimensional points from each pixel's position and corresponding height to obtain 3D data, and display the 3D result in a graphical interface, enabling stereoscopic, all-around observation of the inspected object, serving applications such as intelligent manufacturing and visual inspection.
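The height-map cleanup of step 4-2 can be sketched with plain NumPy; the kernel size and sigma are assumed values, as the patent does not specify them:

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean filter, used in step 4-2 to suppress outlier heights
    (edge-replicated borders; k=3 is an assumed default)."""
    r = k // 2
    p = np.pad(img.astype(np.float64), r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def gaussian_filter(img, sigma=1.0):
    """Separable Gaussian smoothing; sigma is an assumed tuning value."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1, dtype=np.float64)
    kern = np.exp(-x * x / (2 * sigma * sigma))
    kern /= kern.sum()                      # normalize to preserve level
    h, w = img.shape
    # Horizontal pass over an edge-padded copy.
    p = np.pad(img.astype(np.float64), ((0, 0), (r, r)), mode="edge")
    tmp = sum(kern[i] * p[:, i:i + w] for i in range(2 * r + 1))
    # Vertical pass.
    p2 = np.pad(tmp, ((r, r), (0, 0)), mode="edge")
    return sum(kern[i] * p2[i:i + h, :] for i in range(2 * r + 1))
```

A single spurious spike in a 3x3 mean window is attenuated by a factor of nine, after which the Gaussian pass removes the remaining high-frequency ripple before the 3D points are generated.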
The input pictures of the test group are shown in Figs. 2a and 2b. The circuit-board picture group used for testing in this embodiment contains 27 pictures, all obtained by focusing the microscope lens on different areas of the same object. After the depth-of-field synthesis function is turned on, the final global synthesis result is displayed once all 27 pictures have been processed; as shown in Fig. 3, every part of the picture is sharp. From the saved focal values, a 3D model of the circuit board can be generated, as shown in Fig. 4.
The present invention provides a design method for a new-generation intelligent digital microscope depth-of-field synthesis algorithm. There are many ways to implement this technical scheme, and the above description is only a preferred embodiment. It should be noted that those skilled in the art can make improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also fall within the protection scope of the invention. All components not specified in this embodiment can be realized with the prior art.

Claims (1)

1. A method for synthesizing the depth of field of an intelligent digital microscope is characterized by comprising the following steps:
step 1, reading a digital image: a user clicks a certain area of the digital image of the object by a mouse to observe the digital image, the microscope continuously moves, and the digital image when the lens moves is read;
step 2, carrying out spatial domain transformation on the digital image: performing spatial convolution filtering by using a Laplace operator;
and step 3: a depth-of-field synthesis module: calculating a definition evaluation function according to the spatial convolution filtering information and realizing the depth-of-field synthesis process of the lens by an image processing method;
step 4, a 3D generation module: generating an amplified three-dimensional geometric model of the observed object by using the image depth-of-field synthesis processing result;
step 1 comprises the following steps:
step 1-1, setting the width and height of a processing pixel, and taking a group of images with continuously changed definition, namely discrete images of each layer obtained by a microscope lens from top to bottom as system input;
step 1-2, designing a lens class method to realize an image processing function, comprising: initializing, switching a clear image, namely a process of adjusting a focal length by a microscope, returning the image, returning a focal length value, returning a pixel value and returning a picture sequence; moving a lens in real time according to the mouse operation of a user to acquire an image;
step 1-3, providing an interface for a subsequent depth-of-field synthesis module for rapidly calling or modifying image data;
the step 2 comprises the following steps:
step 2-1, acquiring discrete picture groups with different definitions shot by a microscope lens according to the interface in the step 1-3;
step 2-2, initializing a module and a matrix, readjusting the resolution of the image, and converting the original RGB image into a gray image for subsequent processing;
step 2-3, in the process of spatial-domain transformation, the differential definition of the Laplace operator is:

∇²f = ∂²f/∂x² + ∂²f/∂y²

where (x, y) denotes the coordinates of an arbitrary point of the image and f(x, y) the gray value at that point; according to a discretized approximation:

∇²f(x, y) ≈ f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)

the digital image matrix is convolved with the following spatial mask L:

L = [ 0  1  0
      1 −4  1
      0  1  0 ]
the step 3 comprises the following steps:
step 3-1, selecting the range and sequence of the synthetic images in the digital image when the lens moves in the depth of field synthetic mode;
step 3-2, perform Laplace filtering and bilateral filtering on the current image in sequence to obtain the RGB values of a smoother image; the processing result is expressed as a gradient matrix, denoted G(x, y); taking the sum of squared gradient values over all pixel points of the matrix forms the sharpness evaluation function F of depth-of-field synthesis:

F = Σ_x Σ_y G²(x, y),

where:

G(x, y) = Σ_{s=−1}^{1} Σ_{t=−1}^{1} L(s, t) f(x+s, y+t),

and L is the spatial mask L of step 2-3;
3-3, repeating the filtering operation in the step 3-2 on all selected images to be synthesized, judging and updating in real time through definition, and storing the focal length value information of each block of area;
step 3-4, if no further image processing requirement exists, reselecting the next round of synthetic image;
step 4 comprises the following steps:
step 4-1, extracting the distance from the pixel to the tail end of a microscope lens cone while extracting clear pixel points through depth-of-field synthesis, and calculating the height of the pixel point, so that a height map is generated after the depth-of-field synthesis is finished;
step 4-2, removing noise of the height value of the height map by using mean filtering, and then smoothing by using Gaussian filtering;
and 4-3, generating three-dimensional points according to the position and the corresponding height of each pixel in the picture so as to obtain 3D data, and displaying a 3D effect by adopting a graphical interface.
CN202111219520.8A 2021-10-20 2021-10-20 Depth-of-field synthesis method of intelligent digital microscope Active CN114002841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111219520.8A CN114002841B (en) 2021-10-20 2021-10-20 Depth-of-field synthesis method of intelligent digital microscope


Publications (2)

Publication Number Publication Date
CN114002841A CN114002841A (en) 2022-02-01
CN114002841B true CN114002841B (en) 2022-10-04


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005022125A1 (en) * 2005-05-12 2006-11-16 Carl Zeiss Microlmaging Gmbh Light pattern microscope with auto focus mechanism, uses excitation or detection beam path with auto focus for detecting position of focal plane
CN100429551C (en) * 2005-06-16 2008-10-29 武汉理工大学 Composing method for large full-scene depth picture under microscope
CN113852761B (en) * 2021-09-27 2023-07-04 宁波华思图科技有限公司 Automatic focusing method for intelligent digital microscope



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240618

Address after: No. 1, Lujiahe Road, Lanjiang Street, Yuyao City, Ningbo City, Zhejiang Province 315400

Patentee after: Ningbo Shengda Instrument Co.,Ltd.

Country or region after: China

Address before: 315400 Yuyao City Economic Development Zone, Ningbo City, Zhejiang Province

Patentee before: Ningbo huasitu Technology Co.,Ltd.

Country or region before: China