CN105488845B - Method for generating a three-dimensional image and electronic apparatus using the same - Google Patents

Method for generating a three-dimensional image and electronic apparatus using the same

Info

Publication number
CN105488845B
CN105488845B
Authority
CN
China
Prior art keywords
image
focal segment
contour
sharpness
reference image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410474170.3A
Other languages
Chinese (zh)
Other versions
CN105488845A (en)
Inventor
丁奎评
杨朝光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to CN201410474170.3A priority Critical patent/CN105488845B/en
Publication of CN105488845A publication Critical patent/CN105488845A/en
Application granted granted Critical
Publication of CN105488845B publication Critical patent/CN105488845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a method for generating a three-dimensional image and an electronic apparatus using the same. The method includes: obtaining a plurality of images corresponding to a plurality of focal segments, wherein a plurality of focal segment gaps exist between the focal segments; selecting a reference image from the images, and using the reference image as a three-dimensional reference plane in a three-dimensional space; performing edge detection on each image according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each image; arranging each image in the three-dimensional space based on each focal segment gap and the three-dimensional reference plane; and performing an interpolation operation between the at least one contour of each image to generate a three-dimensional image.

Description

Method for generating a three-dimensional image and electronic apparatus using the same
Technical field
The invention relates to a method for generating an image and an electronic apparatus using the same, and more particularly to a method for generating a three-dimensional image and an electronic apparatus using the same.
Background art
In modern life, intelligent products of all kinds with camera functions have become an indispensable part of people's daily lives. To meet consumers' ever-growing demand for photography, many vendors have devoted themselves to developing various photography and image-processing applications, which provide functions such as skin beautification, special effects, adding stickers, changing the photo scene, and converting a two-dimensional image into a three-dimensional image.
In existing functions for converting a two-dimensional image into a three-dimensional image, two photos are generally shot simultaneously through two lenses disposed on the intelligent product, and the three-dimensional image is then generated from the two photos. Such a mechanism, however, cannot be applied to a product having only a single lens.
In addition, the existing way of letting a product with only a single lens generate a three-dimensional image is to have the product capture multiple photos from different viewing angles by panning, and then simulate the parallax between the eyes through the horizontal distance differences between the photos, so as to generate the corresponding three-dimensional image. However, this manner of operation is inconvenient for the user.
Summary of the invention
In view of this, the invention proposes a method for generating a three-dimensional image and an electronic apparatus using the same, which can generate a three-dimensional image based on multiple photos corresponding to different focal segments, so that a user can obtain a three-dimensional image simply with a product that has only a single lens.
The invention provides a method for generating a three-dimensional image, adapted to an electronic apparatus. The method includes: obtaining a plurality of images corresponding to a plurality of focal segments, wherein a plurality of focal segment gaps exist between the focal segments; selecting a reference image from the images, and using the reference image as a three-dimensional reference plane in a three-dimensional space; performing edge detection on each image according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each image; arranging each image in the three-dimensional space based on each focal segment gap and the three-dimensional reference plane; and performing an interpolation operation between the at least one contour of each image to generate a three-dimensional image.
The invention provides an electronic apparatus for generating a three-dimensional image. The electronic apparatus includes an image capturing unit, a storage unit, and a processing unit. The storage unit stores a plurality of modules. The processing unit is connected to the image capturing unit and the storage unit, and accesses and executes the modules. The modules include an acquisition module, a selection module, a detection module, an arrangement module, and a generation module. The acquisition module controls the image capturing unit to obtain a plurality of images corresponding to a plurality of focal segments, wherein a plurality of focal segment gaps exist between the focal segments. The selection module selects a reference image from the images and uses the reference image as a three-dimensional reference plane in a three-dimensional space. The detection module performs edge detection on each image according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each image. The arrangement module arranges each image in the three-dimensional space based on each focal segment gap and the three-dimensional reference plane. The generation module performs an interpolation operation between the at least one contour of each image to generate a three-dimensional image.
Based on the above, the method for generating a three-dimensional image and the electronic apparatus proposed by the embodiments of the invention can, after obtaining multiple images corresponding to different focal segments, arrange these images appropriately in a three-dimensional space according to the focal segments. The electronic apparatus can then perform edge detection on each image to find the contours in each image, and perform interpolation operations between the contours of the images, thereby generating a three-dimensional image corresponding to the obtained images.
To make the aforementioned features and advantages of the invention more comprehensible, embodiments accompanied with drawings are described in detail below.
Description of the drawings
Fig. 1 is a schematic diagram of an electronic apparatus according to an embodiment of the invention;
Fig. 2 is a flowchart of a method for generating a three-dimensional image according to an embodiment of the invention;
Fig. 3A to Fig. 3F are schematic diagrams of generating a three-dimensional image according to an embodiment of the invention.
Description of reference signs:
100: electronic apparatus;
110: image capturing unit;
120: storage unit;
121: acquisition module;
122: selection module;
123: detection module;
124: arrangement module;
125: generation module;
130: processing unit;
140: gyroscope;
310: reference contour;
320: first contour;
330: second contour;
D1: first focal segment gap;
D2: second focal segment gap;
DI': specific focal segment gap;
I1: first image;
I2: second image;
S210~S250: steps;
RI: reference image.
Description of embodiments
Fig. 1 is a schematic diagram of an electronic apparatus according to an embodiment of the invention. In the present embodiment, the electronic apparatus 100 may be a smart phone, a tablet computer, a personal digital assistant, a notebook PC, or another similar device. The electronic apparatus 100 includes an image capturing unit 110, a storage unit 120, and a processing unit 130.
The image capturing unit 110 may be any camera having a charge coupled device (CCD) lens, a complementary metal oxide semiconductor (CMOS) lens, or an infrared lens, or may be an image acquisition device capable of obtaining depth information, such as a depth camera or a stereoscopic camera. The storage unit 120 is, for example, a memory, a hard disk, or any other element capable of storing data, and can be used to record a plurality of modules.
The processing unit 130 is coupled to the image capturing unit 110 and the storage unit 120. The processing unit 130 may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors combined with a digital signal processor core, a controller, a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), any other kind of integrated circuit, a state machine, a processor based on an Advanced RISC Machine (ARM), or the like.
In the present embodiment, the processing unit 130 can access the acquisition module 121, the selection module 122, the detection module 123, the arrangement module 124, and the generation module 125 stored in the storage unit 120 to execute the steps of the method for generating a three-dimensional image proposed by the invention.
Fig. 2 is a flowchart of a method for generating a three-dimensional image according to an embodiment of the invention. Fig. 3A to Fig. 3F are schematic diagrams of generating a three-dimensional image according to an embodiment of the invention. The method of this embodiment may be executed by the electronic apparatus 100 of Fig. 1, and the detailed steps of the method are described below with reference to the elements of Fig. 1.
In step S210, the acquisition module 121 controls the image capturing unit 110 to obtain a plurality of images corresponding to a plurality of focal segments. Specifically, the image capturing unit 110 captures multiple images of the same scene according to different focal segments. In addition, to ensure that the method of the invention is timely in practice, the time the image capturing unit 110 takes to obtain the images can be adjusted by the designer, for example to obtain 5 images within one second. It should be understood that the higher the capture speed of the electronic apparatus 100, the more images the image capturing unit 110 can obtain. That is, the number of the images is proportional to the capture speed of the electronic apparatus 100, but the embodiments of the invention are not limited thereto.
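For illustration only, the following is a minimal sketch of the focus-bracketed capture described in step S210. It assumes a hypothetical `camera` object exposing a `capture(focus)` method (the patent does not specify any camera API), and uses the example values from this paragraph (several focal segments captured within roughly one second).

```python
import time

def capture_focus_stack(camera, focal_segments, time_budget_s=1.0):
    """Capture one image of the same scene per focal segment.

    camera: hypothetical object exposing capture(focus).
    focal_segments: list of focus settings, e.g. five values to be
    captured within about one second, as in the example above.
    Returns a list of (focal_segment, image) pairs.
    """
    images = []
    start = time.time()
    for focus in focal_segments:
        if time.time() - start > time_budget_s:
            break  # stay within the designer-chosen time budget
        images.append((focus, camera.capture(focus)))
    return images
```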
In step S220, the selection module 122 selects a reference image from the images and uses the reference image as a three-dimensional reference plane in a three-dimensional space. Here, the reference image is the image among the images whose focal segment is the largest of the focal segments. In other words, the selection module 122 may use the clearest image as the reference image (because its focal segment is the largest), but the embodiments of the invention are not limited thereto. The three-dimensional space may be characterized by an X axis, a Y axis, and a Z axis, and the selection module 122 may, for example, attach the reference image to the X-Y plane of the three-dimensional space to define the three-dimensional reference plane.
Fig. 3A is, for example, a schematic diagram after the selection module 122 attaches the reference image RI to the X-Y plane. Alternatively, in other embodiments, the designer may attach the reference image to any plane in the three-dimensional space to define the three-dimensional reference plane.
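A minimal sketch of step S220 under the assumptions of this embodiment: the image with the largest focal segment is taken as the reference image and anchored on the X-Y plane at Z = 0. The data structures are illustrative only and follow from the capture sketch above.

```python
def select_reference(images):
    """images: list of (focal_segment, image) pairs of the same scene.

    Returns the pair with the largest focal segment; this image serves
    as the three-dimensional reference plane at Z = 0.
    """
    return max(images, key=lambda pair: pair[0])
```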
In step S230, the detection module 123 performs edge detection on each image according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each image. The sharpness reference value is, for example, a value between 0 and 1 (e.g., 0.3), which can be decided by the designer according to actual needs. After the sharpness reference value is determined, the detection module 123 can find the corresponding contour in each image accordingly.
Assume that the images include a first image, and the first image includes a plurality of pixels. The pixels include a first pixel and a second pixel adjacent to the first pixel, and the first pixel and the second pixel have a first grayscale value and a second grayscale value, respectively. For convenience of describing the concept of the invention, in the following the first image is assumed to be the image with the first focal segment, the first focal segment is second only to the maximum focal segment of the reference image, and a first focal segment gap exists between the first focal segment and the maximum focal segment.
When the detection module 123 finds the contour corresponding to the sharpness reference value in the first image, for each pair of adjacent first and second pixels, the detection module 123 calculates the difference between the first grayscale value and the second grayscale value. When this difference exceeds a predetermined threshold (for example, 30%), the detection module 123 defines one of the first pixel and the second pixel as a contour pixel of the first image. That is, when the detection module 123 detects a large change in grayscale value between adjacent pixels, it determines that a boundary exists there, and defines one of the pixels (for example, the pixel with the higher grayscale value) as a contour pixel. Afterwards, the detection module 123 can find all contour pixels in the first image and accordingly define one or more first contours in the first image. For example, the detection module 123 may connect adjacent or nearby contour pixels into the contour, but the embodiments of the invention are not limited thereto.
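The following is a minimal sketch of the contour-pixel test described above, assuming grayscale images stored as NumPy arrays normalized to the range 0 to 1 so that the 30% threshold can be compared directly against grayscale differences; the function and variable names are illustrative and not part of the patent.

```python
import numpy as np

def contour_pixels(gray, threshold=0.3):
    """Mark contour pixels where adjacent grayscale values differ by
    more than `threshold` (e.g. 30%), keeping, for each such pair,
    the pixel with the higher grayscale value, as in the embodiment.

    gray: 2-D float array with values in [0, 1].
    Returns a boolean mask of contour pixels.
    """
    mask = np.zeros_like(gray, dtype=bool)

    # Horizontal neighbours: compare each pixel with the one to its right.
    diff_h = np.abs(gray[:, 1:] - gray[:, :-1])
    keep_right = gray[:, 1:] >= gray[:, :-1]
    mask[:, 1:] |= (diff_h > threshold) & keep_right
    mask[:, :-1] |= (diff_h > threshold) & ~keep_right

    # Vertical neighbours: compare each pixel with the one below it.
    diff_v = np.abs(gray[1:, :] - gray[:-1, :])
    keep_down = gray[1:, :] >= gray[:-1, :]
    mask[1:, :] |= (diff_v > threshold) & keep_down
    mask[:-1, :] |= (diff_v > threshold) & ~keep_down

    return mask
```

Connecting adjacent or nearby pixels of this mask into one or more contours can then be done with any standard connected-component or contour-tracing routine.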
For the other images besides the first image, those skilled in the art should be able to find the contour corresponding to the sharpness reference value in each of them according to the above teaching, so the details are not repeated here. Referring to Fig. 3B, for ease of description, the contour found in the reference image RI is denoted as the reference contour 310.
Afterwards, in step S240, the arrangement module 124 arranges each image in the three-dimensional space based on each focal segment gap and the three-dimensional reference plane. Specifically, as shown in Fig. 3C, the arrangement module 124 may arrange the first image I1 parallel to the reference image RI at a first position that is the first focal segment gap D1 away from the reference image RI, wherein the arranged first image I1 is aligned with the reference image RI. It should be understood that the first image I1 may also include the first contour 320 found by the detection module 123.
Assume that the images further include a second image corresponding to a second focal segment (smaller than the first focal segment), and that a second focal segment gap exists between the second focal segment and the first focal segment; the arrangement module 124 can then further arrange the second image in the three-dimensional space based on the above mechanism.
Referring to Fig. 3D, the arrangement module 124 may arrange the second image I2 parallel to the first image I1 at a second position that is the second focal segment gap D2 away from the first image I1, wherein the arranged second image I2 is aligned with the first image I1. As shown in Fig. 3D, the first image I1 and the second image I2 are located on the same side of the reference image RI, and the specific focal segment gap DI' between the second image I2 and the reference image RI is the sum of the first focal segment gap D1 and the second focal segment gap D2. It should be understood that the second image I2 may also include the second contour 330 found by the detection module 123.
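A minimal sketch of the arrangement rule of step S240 as described above: the reference image sits on the X-Y plane at Z = 0 and every other image is offset along Z by the accumulated focal segment gaps, so the second image ends up at D1 + D2 from the reference image. The data structures are illustrative only.

```python
def arrange_images(images):
    """images: list of (focal_segment, image) pairs for the same scene.

    Returns a list of (z_offset, image) pairs: the image with the
    largest focal segment (the reference image) lies at Z = 0, and each
    further image is placed at the accumulated focal segment gap from
    the reference, all on the same side of the reference plane.
    """
    # Sort from largest focal segment (reference image) to smallest.
    ordered = sorted(images, key=lambda pair: pair[0], reverse=True)

    arranged = []
    z = 0.0
    prev_focal = ordered[0][0]
    for focal, image in ordered:
        z += prev_focal - focal      # accumulate the focal segment gap
        arranged.append((z, image))  # reference image stays at z = 0
        prev_focal = focal
    return arranged
```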
Referring again to Fig. 2, in step S250, the generation module 125 performs an interpolation operation between the at least one contour of each image to generate a three-dimensional image. Referring to Fig. 3E, assume that the reference contour 310, the first contour 320, and the second contour 330 all correspond to the same object in the scene (for example, a mountain); the generation module 125 then performs an interpolation operation between the first contour 320 and the reference contour 310 to connect the first contour 320 and the reference contour 310, and performs an interpolation operation between the second contour 330 and the first contour 320 to connect the second contour 330 and the first contour 320.
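A minimal sketch of the interpolation of step S250. The patent does not spell out how points on two contours are matched, so this sketch assumes each contour has already been resampled to the same number of index-aligned points; it then linearly interpolates intermediate contours between two contours of the same object lying at different Z heights, which connects them into a surface.

```python
import numpy as np

def interpolate_contours(contour_a, z_a, contour_b, z_b, steps=10):
    """Linearly interpolate between two contours of the same object.

    contour_a, contour_b: (N, 2) arrays of (x, y) points, index-aligned.
    z_a, z_b: Z heights of the planes on which the contours lie.
    Returns a (steps, N, 3) array of intermediate 3-D contour points
    connecting the two contours into a surface.
    """
    surface = []
    for t in np.linspace(0.0, 1.0, steps):
        xy = (1.0 - t) * contour_a + t * contour_b   # blend the shapes
        z = (1.0 - t) * z_a + t * z_b                # blend the heights
        surface.append(np.column_stack([xy, np.full(len(xy), z)]))
    return np.stack(surface)
```

Applying this between the reference contour and the first contour, and again between the first contour and the second contour, yields the connected three-dimensional shape sketched in Fig. 3E.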
In short, the electronic apparatus 100 can convert the focal segment corresponding to each image into Z-axis height information in the three-dimensional space (that is, into the focal segment gaps), and then arrange each image at an appropriate position in the three-dimensional space according to this Z-axis height information. The electronic apparatus 100 can then perform interpolation operations between the contours of the images, thereby generating the three-dimensional image shown in Fig. 3E.
It should be understood that since the reference image RI used to determine the three-dimensional reference plane is the image with the maximum focal segment, when the three-dimensional image in Fig. 3E is presented to the user for viewing, the electronic apparatus 100 should take the negative Z direction as the top of the three-dimensional image (as shown in Fig. 3F), rather than taking the positive Z direction as the top of the three-dimensional image as shown in Fig. 3E, but the embodiments of the invention are not limited thereto.
In other embodiments, the electronic apparatus 100 may further include a gyroscope 140 connected to the processing unit 130. The processing unit 130 can then rotate the three-dimensional image according to a sensing signal of the gyroscope 140. In this way, the user can further experience the visual effect produced by the three-dimensional image when viewing it.
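For illustration, a minimal sketch of rotating the generated three-dimensional points from a gyroscope reading, assuming the sensing signal has already been integrated into yaw, pitch, and roll angles; how the signal is delivered is device-specific and not detailed in the patent.

```python
import numpy as np

def rotate_points(points, yaw, pitch, roll):
    """Rotate 3-D points (N, 3) by angles in radians derived from the
    gyroscope's sensing signal, so the user can view the generated
    three-dimensional image from different orientations."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    rot_z = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rot_x = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return points @ (rot_z @ rot_y @ rot_x).T
```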
In conclusion, the method for generating a three-dimensional image and the electronic apparatus proposed by the embodiments of the invention can, after obtaining multiple images corresponding to different focal segments, arrange these images appropriately in a three-dimensional space according to the focal segments. The electronic apparatus can then perform edge detection on each image to find the contours in each image, and perform interpolation operations between the contours of the images, thereby generating a three-dimensional image corresponding to the obtained images. Thus, even if the electronic apparatus is configured with only a single image capturing unit, it can still generate a three-dimensional image smoothly and conveniently, and can therefore provide the user with an experience different from before.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the invention rather than to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the invention.

Claims (8)

1. A method for generating a three-dimensional image, adapted to an electronic apparatus, characterized by comprising:
obtaining a plurality of images corresponding to a plurality of focal segments, wherein a plurality of focal segment gaps exist between the focal segments;
selecting a reference image from the images, and using the reference image as a three-dimensional reference plane in a three-dimensional space, wherein the images correspond to the same scene, and the reference image has the largest focal segment among the focal segments;
performing edge detection on each of the images according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each of the images, wherein the images comprise a first image corresponding to a first focal segment, a first focal segment gap exists between the first focal segment and the largest focal segment, and the reference image comprises a reference contour corresponding to the sharpness reference value;
in the three-dimensional space, arranging the first image parallel to the reference image at a first position at the first focal segment gap from the reference image, and arranging each of the images based on each of the focal segment gaps and the three-dimensional reference plane, wherein the arranged first image is aligned with the reference image; and
performing an interpolation operation between the at least one contour of each of the images to generate a three-dimensional image.
2. The method according to claim 1, characterized in that the images further comprise a second image corresponding to a second focal segment, a second focal segment gap exists between the second focal segment and the first focal segment, and after the step of arranging the first image parallel to the reference image at the first position at the first focal segment gap from the reference image, the method further comprises:
arranging the second image parallel to the first image at a second position at the second focal segment gap from the first image, wherein the arranged second image is aligned with the first image,
wherein the first image and the second image are located on the same side of the reference image, and a specific focal segment gap between the second image and the reference image is the sum of the first focal segment gap and the second focal segment gap.
3. The method according to claim 1, characterized in that the first image comprises a first contour corresponding to the sharpness reference value, the reference image comprises the reference contour corresponding to the sharpness reference value, the first contour and the reference contour correspond to a first object, and the step of performing the interpolation operation between the at least one contour of each of the images to generate the three-dimensional image comprises:
performing the interpolation operation between the first contour and the reference contour to connect the first contour and the reference contour.
4. The method according to claim 3, characterized in that the images further comprise a second image, the second image comprises a second contour corresponding to the sharpness reference value, and the second contour corresponds to the first object, wherein after the step of connecting the first contour and the reference contour, the method further comprises:
performing the interpolation operation between the second contour and the first contour to connect the second contour and the first contour.
5. The method according to claim 1, characterized in that the number of the images is proportional to a capture speed of the electronic apparatus.
6. The method according to claim 1, characterized in that the images comprise a first image, the first image comprises a plurality of pixels, the pixels comprise a first pixel and a second pixel adjacent to the first pixel, the first pixel has a first grayscale value, the second pixel has a second grayscale value, and the step of performing the edge detection on each of the images according to the sharpness reference value to find the at least one contour corresponding to the sharpness reference value in each of the images comprises:
calculating a difference between the first grayscale value and the second grayscale value;
when the difference exceeds a predetermined threshold, defining one of the first pixel and the second pixel as a contour pixel of the first image; and
finding all contour pixels in the first image, and accordingly defining the at least one contour in the first image.
7. The method according to claim 1, characterized by further comprising, after the step of generating the three-dimensional image:
rotating the three-dimensional image according to a sensing signal of a gyroscope of the electronic apparatus.
8. An electronic apparatus for generating a three-dimensional image, characterized by comprising:
an image capturing unit;
a storage unit, storing a plurality of modules; and
a processing unit, connected to the image capturing unit and the storage unit, and accessing and executing the modules, wherein the modules comprise:
an acquisition module, controlling the image capturing unit to obtain a plurality of images corresponding to a plurality of focal segments, wherein a plurality of focal segment gaps exist between the focal segments;
a selection module, selecting a reference image from the images, and using the reference image as a three-dimensional reference plane in a three-dimensional space, wherein the images correspond to the same scene, and the reference image has the largest focal segment among the focal segments;
a detection module, performing edge detection on each of the images according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each of the images, wherein the images comprise a first image corresponding to a first focal segment, a first focal segment gap exists between the first focal segment and the largest focal segment, and the reference image comprises a reference contour corresponding to the sharpness reference value;
an arrangement module, arranging, in the three-dimensional space, the first image parallel to the reference image at a first position at the first focal segment gap from the reference image, and arranging each of the images based on each of the focal segment gaps and the three-dimensional reference plane, wherein the arranged first image is aligned with the reference image; and
a generation module, performing an interpolation operation between the at least one contour of each of the images to generate a three-dimensional image.
CN201410474170.3A 2014-09-17 2014-09-17 Method for generating a three-dimensional image and electronic apparatus using the same Active CN105488845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410474170.3A CN105488845B (en) 2014-09-17 2014-09-17 Method for generating a three-dimensional image and electronic apparatus using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410474170.3A CN105488845B (en) 2014-09-17 2014-09-17 Method for generating a three-dimensional image and electronic apparatus using the same

Publications (2)

Publication Number Publication Date
CN105488845A CN105488845A (en) 2016-04-13
CN105488845B true CN105488845B (en) 2018-09-25

Family

ID=55675809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410474170.3A Active CN105488845B (en) 2014-09-17 2014-09-17 Method for generating a three-dimensional image and electronic apparatus using the same

Country Status (1)

Country Link
CN (1) CN105488845B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107452008A (en) * 2016-06-01 2017-12-08 上海东方传媒技术有限公司 Method for detecting image edge and device
CN106446908A (en) * 2016-08-31 2017-02-22 乐视控股(北京)有限公司 Method and device for detecting object in image
CN111179291B (en) * 2019-12-27 2023-10-03 凌云光技术股份有限公司 Edge pixel point extraction method and device based on neighborhood relation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101008104A (en) * 2006-12-28 2007-08-01 西安理工大学 Melt liquid level position detecting method for CZ method monocrystalline silicon growth
CN101858741A (en) * 2010-05-26 2010-10-13 沈阳理工大学 Zoom ranging method based on single camera
CN102204263A (en) * 2008-11-03 2011-09-28 微软公司 Converting 2D video into stereo video
CN103379267A (en) * 2012-04-16 2013-10-30 鸿富锦精密工业(深圳)有限公司 Three-dimensional space image acquisition system and method
CN103578133A (en) * 2012-08-03 2014-02-12 浙江大华技术股份有限公司 Method and device for reconstructing two-dimensional image information in three-dimensional mode
CN103782234A (en) * 2011-09-09 2014-05-07 富士胶片株式会社 Stereoscopic image capture device and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014154907A (en) * 2013-02-05 2014-08-25 Canon Inc Stereoscopic imaging apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101008104A (en) * 2006-12-28 2007-08-01 西安理工大学 Melt liquid level position detecting method for CZ method monocrystalline silicon growth
CN102204263A (en) * 2008-11-03 2011-09-28 微软公司 Converting 2D video into stereo video
CN101858741A (en) * 2010-05-26 2010-10-13 沈阳理工大学 Zoom ranging method based on single camera
CN103782234A (en) * 2011-09-09 2014-05-07 富士胶片株式会社 Stereoscopic image capture device and method
CN103379267A (en) * 2012-04-16 2013-10-30 鸿富锦精密工业(深圳)有限公司 Three-dimensional space image acquisition system and method
CN103578133A (en) * 2012-08-03 2014-02-12 浙江大华技术股份有限公司 Method and device for reconstructing two-dimensional image information in three-dimensional mode

Also Published As

Publication number Publication date
CN105488845A (en) 2016-04-13

Similar Documents

Publication Publication Date Title
EP3248374B1 (en) Method and apparatus for multiple technology depth map acquisition and fusion
US11494929B2 (en) Distance measurement device
US20180014003A1 (en) Measuring Accuracy of Image Based Depth Sensing Systems
US10645364B2 (en) Dynamic calibration of multi-camera systems using multiple multi-view image frames
CN102572492B (en) Image processing device and method
US10063840B2 (en) Method and system of sub pixel accuracy 3D measurement using multiple images
US20180315213A1 (en) Calibrating texture cameras using features extracted from depth images
KR20150080003A (en) Using motion parallax to create 3d perception from 2d images
US20150310620A1 (en) Structured stereo
CN109661815B (en) Robust disparity estimation in the presence of significant intensity variations of the camera array
CN104205825B (en) Image processing apparatus and method and camera head
WO2023169283A1 (en) Method and apparatus for generating binocular stereoscopic panoramic image, device, storage medium, and product
CN105488845B (en) Method for generating a three-dimensional image and electronic apparatus using the same
TWI549478B (en) Method for generating 3d image and electronic apparatus using the same
US9807362B2 (en) Intelligent depth control
US9918015B2 (en) Exposure control using depth information
US20210035317A1 (en) Efficient sub-pixel disparity estimation for all sub-aperture images from densely sampled light field cameras
CN104270627A (en) Information processing method and first electronic equipment
CN113890984B (en) Photographing method, image processing method and electronic equipment
CN108431867A (en) Data processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant