CN111045286A - Projection method and system based on a double-folding screen field, and double-folding screen field
- Publication number: CN111045286A
- Application number: CN201811185483.1A
- Authority: CN (China)
- Prior art keywords: projection, azimuth, virtual, viewing, angle
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B37/00—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
- G03B37/04—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
- G06F3/1446—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The invention discloses a projection method and system based on a double-folding screen field, and the double-folding screen field itself, relating to the technical field of image processing. The method comprises the following steps: acquiring position reference information corresponding to a viewing position; calculating, from the position reference information, the azimuth viewing angles of the viewing position in two azimuths; generating, in a virtual scene, a virtual picture corresponding to each azimuth viewing angle; and fusing the virtual pictures of the two azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, and projecting the scene picture onto the corresponding double-folding screen field. The double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one projection surface is variable. The double-folding screen field can therefore change the size of its projection area according to actual requirements and is highly adaptable, and the projection changes with the viewing position, so that the viewer truly sees a stereoscopic scene image.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a projection method and system based on a double-folding screen field, and to the double-folding screen field itself.
Background
At present, multi-screen fusion is mostly limited to planar fusion, i.e. multiple screens are joined together and fused into a single flat image. Multi-screen fusion imaging in a stereoscopic space is less widely used; its fusion imaging process relies on manual splicing in three-dimensional software, and it is mostly used for commercial presentations such as furniture displays, property displays and the like.
During display, the real three-dimensional space usually adopts a six-sided, room-like structure of fixed size. The size of the space cannot be changed to let different users experience different space sizes when viewing the imagery; moreover, a projection often does not need all six walls, which wastes space.
In addition, with multi-screen fusion imaging in a three-dimensional space the viewing angle is fixed and cannot be updated in real time as the viewer moves. With a fixed viewing angle, when a viewer stands at another position the displayed stereoscopic scene image appears distorted, which degrades the viewing experience and prevents the viewer from truly seeing a stereoscopic scene image.
Disclosure of Invention
The invention aims to provide a projection method and system based on a double-folding screen field, and the double-folding screen field itself, so that a viewer can view a stereoscopic scene image in real time and the user experience is improved.
The technical solution provided by the invention is as follows:
A projection method based on a double-folding screen field comprises the following steps: acquiring position reference information corresponding to a viewing position; calculating, from the position reference information, the azimuth viewing angles of the viewing position in two azimuths; generating, in a virtual scene, a virtual picture corresponding to each azimuth viewing angle; and fusing the virtual pictures of the two azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, and projecting the scene picture onto the corresponding double-folding screen field; the double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one projection surface is variable.
In this technical solution, the size of the projection area of the double-folding screen field is adjustable and can meet different requirements, and the projected scene picture changes with the viewing position, which improves the viewer's perception of the stereoscopic scene image.
Further preferably, calculating, from the position reference information, the azimuth viewing angles of the viewing position in two azimuths comprises: calculating the azimuth viewing angle of the viewing position in each of the two azimuths from the position reference information and the viewing-angle calculation formula of that azimuth.
Further preferably, when the X coordinate information in the position reference information is on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, the corresponding virtual picture is cut out of the virtual scene according to the azimuth viewing angle corresponding to the X axis; and the cropping area corresponding to the azimuth viewing angle associated with the coordinate information on the remaining axis is calculated, and the corresponding virtual picture is cut out of the virtual scene according to that cropping area and its azimuth viewing angle.
Further preferably, generating, in the virtual scene, a virtual picture corresponding to each azimuth viewing angle comprises: when the X coordinate information in the position reference information is not on the X-axis centre line and the Y coordinate information is on the Y-axis centre line, cutting the corresponding virtual picture out of the virtual scene according to the azimuth viewing angle corresponding to the Y axis; and calculating the cropping area corresponding to the azimuth viewing angle associated with the coordinate information on the remaining axis, and cutting the corresponding virtual picture out of the virtual scene according to that cropping area and its azimuth viewing angle.
Further preferably, generating, in the virtual scene, a virtual picture corresponding to each azimuth viewing angle comprises: when the X coordinate information is not on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, calculating the cropping area corresponding to each azimuth viewing angle; and cutting the corresponding virtual picture out of the virtual scene according to each azimuth viewing angle and its cropping area.
Further preferably, calculating the cropping area corresponding to an azimuth viewing angle specifically comprises: calculating the viewing-angle picture parameter of each azimuth from the position reference information and the azimuth viewing angle of that azimuth; and calculating the cropping area of each azimuth from its viewing-angle picture parameter and the viewing-space parameter of that azimuth.
Further preferably, calculating the cropping area corresponding to an azimuth viewing angle specifically comprises: analysing the positional offset of the position reference information relative to preset position information, and calculating the corresponding cropping area from that positional offset.
The invention also provides a double-folding screen field, comprising two mutually connected projection surfaces, wherein the projection size of at least one projection surface is variable.
Further preferably: the projection surface is a movable wall, and its projection size is changed by moving it; or the projection surface is a foldable, retractable movable wall, and its projection size is changed by folding and retracting it; or the projection surface is formed by a plurality of drop-down curtains, and its projection size is changed by raising and lowering the curtains.
The invention also provides a projection system based on the double-folding screen field, comprising: an intelligent device, a projection device and a double-folding screen field. The intelligent device comprises: an acquisition module for acquiring the position reference information corresponding to the viewing position; a calculation module for calculating, from the position reference information, the azimuth viewing angles of the viewing position in two azimuths; a picture generation module for generating, in the virtual scene, a virtual picture corresponding to each azimuth viewing angle; and a picture fusion module for fusing the virtual pictures of the two azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position. The projection device projects the scene picture onto the double-folding screen field, and the inner side of the double-folding screen field contains the viewing position. The double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one projection surface is variable.
In this technical solution, the size of the projection area of the double-folding screen field is adjustable and can meet different requirements, and the projected scene picture changes with the viewing position, which improves the viewer's perception of the stereoscopic scene image.
Compared with the prior art, the projection method and system based on the double-folding screen field, and the double-folding screen field itself, have the following advantages:
the double-folding screen field can change the size of its projection area according to actual requirements and is highly adaptable; and the projection changes with the viewing position, so that the viewer truly sees a stereoscopic scene image.
Drawings
The above features, technical features, advantages and implementations of the double-folding screen field are further described below, in a clearly understandable manner, with reference to the accompanying drawings and preferred embodiments.
FIG. 1 is a flow chart of one embodiment of the projection method based on a double-folding screen field according to the present invention;
FIG. 2 is a flow chart of another embodiment of the projection method based on a double-folding screen field according to the present invention;
FIG. 3 is a flow chart of a further embodiment of the projection method based on a double-folding screen field according to the present invention;
FIG. 4 is a structural block diagram of one embodiment of the projection system based on a double-folding screen field according to the present invention;
FIG. 5 is a structural block diagram of the intelligent device in the present invention;
FIG. 6 is a schematic view of the wall surfaces in the present invention;
FIG. 7 is a schematic view of the azimuth viewing angles of a viewpoint/viewing position in various azimuths in the present invention;
FIG. 8 is a schematic view of the azimuth viewing angles of another viewpoint/viewing position in various azimuths in the present invention;
FIG. 9 is a schematic view of the azimuth viewing angles of a further viewpoint/viewing position in various azimuths in the present invention;
FIG. 10 is a schematic view of cropping in the front azimuth of a viewpoint/viewing position in the present invention;
FIG. 11 is a schematic view of cropping in the rear azimuth of a viewpoint/viewing position in the present invention;
FIG. 12 is a schematic view of cropping in the left azimuth of a viewpoint/viewing position in the present invention.
Reference numerals:
10 - mobile terminal
20 - intelligent device; 21 - acquisition module; 22 - calculation module
23 - picture generation module; 24 - picture fusion module
30 - projection device
Detailed Description
To illustrate the embodiments of the present invention and the technical solutions of the prior art more clearly, the following description is given with reference to the accompanying drawings. Obviously, the drawings described below are only some examples of the invention, and a person skilled in the art can derive other drawings and embodiments from them without inventive effort.
For simplicity, the drawings only schematically show the parts relevant to the present invention and do not represent the actual structure of a product. In addition, to keep the drawings concise and easy to understand, components having the same structure or function are, in some drawings, only schematically illustrated or only partially labelled. In this document, "one" means not only "only one" but also "more than one".
According to an embodiment of the present invention, as shown in Fig. 1, a projection method based on a double-folding screen field comprises:
S10, acquiring position reference information corresponding to the viewing position.
Specifically, after the viewer enters the viewing space (i.e. the inner side of the double-folding screen field), the viewing position of the viewer is obtained through a mobile terminal 10 carried by the viewer; the mobile terminal 10 is capable of indoor positioning. The mobile terminal 10 may be a mobile phone, a tablet computer, a smart bracelet or the like, with the indoor positioning function integrated into a device the viewer already uses every day; alternatively, a dedicated hand-held terminal with an integrated indoor positioning function may be provided.
S20, calculating, from the position reference information, the azimuth viewing angles of the viewing position in two azimuths.
Specifically, at different positions, a person's viewing angle in each azimuth also differs; for example, at different positions the picture presented when viewing the same object in the same azimuth is different. The pictures differ because the viewing angle changes as the object is viewed from different positions.
The position information of the viewing position comprises X-axis, Y-axis and Z-axis coordinate information, from which two azimuth viewing angles can be calculated, for example: a front azimuth viewing angle and a left azimuth viewing angle; or a front azimuth viewing angle and a lower azimuth viewing angle; or an upper azimuth viewing angle and a front azimuth viewing angle.
One projection surface is designated as the front according to actual requirements, and the azimuth of the other projection surface is determined from its position relative to the front projection surface. For example, a projection surface located on the left side of the front projection surface is the left azimuth.
S30, generating, in the virtual scene, a virtual picture corresponding to each azimuth viewing angle.
Specifically, the virtual scene is an integral picture; it may be a decorated-house scene, a show-flat display scene, a commodity display scene and the like. The virtual scene is cut in three-dimensional space: after the azimuth viewing angles of the viewing position have been calculated, the front virtual picture, for example, is cut out of the virtual scene in three-dimensional space according to the front azimuth viewing angle; the rear, left, right, upper and lower virtual pictures can be obtained in the same way.
Because only a double-folding screen field is used in this embodiment, only the virtual pictures corresponding to two azimuths are cut out.
S40, fusing the virtual pictures of the two azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, and projecting the scene picture onto the corresponding double-folding screen field.
Specifically, after the virtual pictures of the two azimuths are obtained, they are seamlessly spliced and fused into a complete scene picture as seen from the viewing position, which is projected onto the double-folding screen field for a user standing at the viewing position to watch.
The double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one projection surface is variable.
Specifically, because the projection size of the double-folding screen field is variable, the projection sizes of its two projection surfaces must be selected according to the virtual scene, so that the effect is realistic when the virtual scene is projected onto the double-folding screen field. For example, a double-folding screen field consists of a front projection surface and a left projection surface whose sizes are 3 × 2.8 metres and 4 × 2.8 metres respectively.
There are various ways to vary the size of the double-folding screen field, for example: 1. the projection surface is a movable wall that can move along a guide rail, changing the projectable range of the double-folding screen field; 2. the projection surface is a movable wall that can fold and retract like a folding fan, and folding and retracting it as required adjusts the projectable range of the double-folding screen field; 3. the projection surface is formed by several drop-down curtains mounted above on a grid, with the width of each curtain set as required (for example 50 cm); when the front and left projection surfaces are needed, the curtains at the corresponding positions are lowered, the number of curtains to lower being determined by the projection sizes of the front and left surfaces; taking a 3-metre-long front projection surface and 50 cm curtains as an example, six curtains are lowered to form the front projection surface. Other ways of making the projection size of the double-folding screen field variable are of course also possible and are not limited here.
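As a small illustration of the curtain-based sizing just described, the number of curtains to lower for a given surface can be computed as follows; this is only a sketch using the 50 cm curtain width and 3-metre wall from the example, and the function name is ours rather than the patent's.

```python
import math

def curtains_needed(surface_length_m: float, curtain_width_m: float = 0.5) -> int:
    """Number of drop-down curtains to lower to cover one projection surface."""
    return math.ceil(surface_length_m / curtain_width_m)

# Example from the description: a 3 m front projection surface with 0.5 m curtains
print(curtains_needed(3.0))  # -> 6
```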
It should be noted that any projection surface can be designated as the front azimuth, and the azimuths of the other walls are determined from their positions relative to the front wall.
The two walls can be positioned in various ways, for example: a double-folding screen field consisting of the front and left projection surfaces; of the front and right projection surfaces; of the front and lower projection surfaces; and so on, as long as the two projection surfaces are connected to each other to form a whole enclosing a three-dimensional projection space.
One or more projection devices 30 are arranged for each projection surface to provide the basis for projecting the subsequent scene picture. Several projection devices 30 sharing one projection surface ensure a high-definition projection even on a long surface (for example 6 or 8 metres). For example, an 8-metre projection surface is served by two projection devices 30, each projecting half of it: one projects the scene picture onto the front 4 metres of the surface and the other onto the rear 4 metres, ensuring the projection effect.
In this embodiment, the position reference information corresponding to the viewing position can be one of two types.
First type: the position reference information is virtual position information.
The viewing position information in the viewing space is converted into virtual position information in the virtual scene according to the correspondence between the spatial coordinates of the viewing space (i.e. the inside of the double-folding screen field) and the virtual coordinates of the virtual scene, and the virtual position coordinates are used as the position reference information.
Specifically, in the case of real-time rendering, the viewing position information is converted into virtual position information, and the calculation of the azimuth viewing angles and the generation of the virtual pictures are carried out with this virtual position information. The essence of real-time rendering is the real-time computation and output of graphics data.
Second type: the position reference information is position pixel information.
The viewing position information in the viewing space is converted into position pixel information in the virtual scene according to the correspondence between the spatial coordinates of the viewing space and the picture pixels of the virtual scene, and the position pixel information is used as the position reference information.
Specifically, in the case of offline rendering, the viewing position information is converted into position pixel information, and the calculation of the azimuth viewing angles and the generation of the virtual pictures are carried out with this position pixel information.
The scene model of the virtual scene and the space model of the viewing space (i.e. the model of the double-folding screen field) are in a specific proportional relationship; the viewing space is a double-folding screen field formed by two adjacent walls, as shown in Fig. 6. The specific proportional relationship is 1:1.
When the size of a projection surface of the double-folding screen field in real space (i.e. the projection size) is changed, the size of the scene model of the virtual scene is changed adaptively so that the 1:1 ratio is maintained.
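A minimal sketch of the two conversions described above, assuming the stated 1:1 scale between the viewing space and the scene model and, for the pixel case, the 2 m × 4 m viewing space and 1200 dp × 1920 dp picture specification used later in the description; the function names, origin and axis conventions are illustrative only.

```python
def to_virtual_position(view_pos, scale=1.0, origin=(0.0, 0.0, 0.0)):
    """Real-time rendering: map a viewing-space coordinate (x, y, z) in metres to a
    virtual-scene coordinate under the 1:1 proportional relationship."""
    return tuple(scale * (v - o) for v, o in zip(view_pos, origin))

def to_pixel_position(view_pos_xy, space_size_m=(2.0, 4.0), picture_px=(1200, 1920)):
    """Offline rendering: map the (x, y) part of a viewing-space coordinate to picture
    pixels, assuming the picture spans the whole viewing space
    (x across the 2 m width -> 1200 dp, y along the 4 m length -> 1920 dp)."""
    x, y = view_pos_xy
    return (x / space_size_m[0] * picture_px[0],
            y / space_size_m[1] * picture_px[1])

# The centre of the 2 m x 4 m viewing space maps to (600 dp, 960 dp),
# matching the centre lines given later in the description
print(to_virtual_position((1.0, 2.0, 1.6)))  # -> (1.0, 2.0, 1.6)
print(to_pixel_position((1.0, 2.0)))         # -> (600.0, 960.0)
```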
Two corresponding azimuth viewing angles are calculated for each viewing position; the same azimuth has different azimuth viewing angles at different viewing positions, and different azimuth viewing angles yield different virtual pictures for the same azimuth. For a given viewing position, the virtual pictures of the two azimuths are seamlessly spliced into a complete scene picture, which is projected onto the double-folding screen field. Because the viewing angle of the virtual picture changes as the viewer moves, the viewer's viewing angle is kept updated in real time and the displayed stereoscopic scene picture is updated in time, so the presented stereoscopic scene image is not distorted by a change of viewing position.
According to another embodiment of the present invention, as shown in Fig. 2, a projection method based on a double-folding screen field comprises:
S10, acquiring position reference information corresponding to the viewing position.
S21, calculating the azimuth viewing angle of the viewing position in each of the two azimuths from the position reference information and the viewing-angle calculation formula of that azimuth.
Specifically, when the two azimuth viewing angles of the double-folding screen field need to be calculated, for example the front and left azimuth viewing angles, the front azimuth viewing angle is calculated with the viewing-angle formula of the front azimuth, and the left azimuth viewing angle with the viewing-angle formula of the left azimuth.
As shown in Fig. 10, the front azimuth viewing angle is FOV = 2∠θ, where tan θ = (L1/2 + s)/y; L1 is the lateral length of the viewing space, i.e. the length of the front wall of the double-folding screen field, s is the lateral offset from the centre of the viewing space, and y is the viewing distance straight ahead within the viewing space.
As shown in Fig. 11, the left azimuth viewing angle is FOV = 2∠α, where tan α = (L2/2 + p)/x; L2 is the longitudinal length of the viewing space, p is the longitudinal offset from the centre of the viewing space, and x is the viewing distance to the left within the viewing space.
When the position information of the viewpoint o is known, the azimuth viewing angle of each azimuth can be calculated; the azimuth viewing angles corresponding to the right, upper and lower azimuths of the viewpoint o can likewise be calculated by formula and are not repeated here.
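The two formulas above translate directly into code. The sketch below follows them as reconstructed here; the function names and example numbers are ours and purely illustrative.

```python
import math

def front_azimuth_fov(L1, s, y):
    """Front azimuth viewing angle: FOV = 2*theta with tan(theta) = (L1/2 + s)/y.
    L1: lateral length of the viewing space (front wall), s: lateral offset from
    its centre, y: viewing distance straight ahead."""
    return 2.0 * math.degrees(math.atan((L1 / 2.0 + s) / y))

def left_azimuth_fov(L2, p, x):
    """Left azimuth viewing angle: FOV = 2*alpha with tan(alpha) = (L2/2 + p)/x.
    L2: longitudinal length of the viewing space, p: longitudinal offset from
    its centre, x: viewing distance to the left."""
    return 2.0 * math.degrees(math.atan((L2 / 2.0 + p) / x))

# Illustrative viewpoint: 0.5 m lateral offset, 2 m from the 3 m front wall,
# 0.3 m longitudinal offset, 1.5 m from the 4 m left wall
print(front_azimuth_fov(3.0, 0.5, 2.0))  # ~90.0 degrees
print(left_azimuth_fov(4.0, 0.3, 1.5))   # ~113.8 degrees
```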
S30, generating, in the virtual scene, a virtual picture corresponding to each azimuth viewing angle.
Optionally, two orthogonal cameras are generated and bound to each other; each orthogonal camera is perpendicular to the projection surface corresponding to its azimuth; and the orthogonal cameras intercept, in the virtual scene, the virtual picture corresponding to each azimuth.
Specifically, binding the two orthogonal cameras to each other means that their coordinates are the same, i.e. they are located at the same point. Each orthogonal camera is perpendicular to its corresponding projection surface (for example the projection surface corresponding to the front); the size and position of the azimuth viewing angle correspond to a unique view frustum, through which part of the virtual scene is intercepted, and the several pictures are seamlessly spliced to obtain the overall stereoscopic space picture.
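Independently of any particular rendering engine, the binding described above can be sketched as two camera records that share one set of coordinates, each facing its own projection surface; the class, direction vectors and numbers below are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BoundCamera:
    position: Tuple[float, float, float]  # shared viewpoint coordinates
    forward: Tuple[float, float, float]   # unit vector perpendicular to its projection surface
    fov_deg: float                        # azimuth viewing angle for that azimuth

def bind_cameras(viewpoint, front_fov_deg, left_fov_deg):
    """Create the two mutually bound cameras: same coordinates, one facing the
    front projection surface and one facing the left projection surface."""
    front_cam = BoundCamera(viewpoint, (0.0, 1.0, 0.0), front_fov_deg)
    left_cam = BoundCamera(viewpoint, (-1.0, 0.0, 0.0), left_fov_deg)
    return front_cam, left_cam

front_cam, left_cam = bind_cameras((1.0, 2.0, 1.6), 90.0, 113.8)
assert front_cam.position == left_cam.position  # "bound" = located at the same point
```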
S40, fusing the virtual pictures of the two azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, and projecting the scene picture onto the corresponding double-folding screen field.
The double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one projection surface is variable.
In this embodiment, the azimuth viewing angle of each azimuth is calculated with the viewing-angle formula of that azimuth, which improves the accuracy of each azimuth viewing angle: an error in the calculation of one azimuth viewing angle does not affect the accuracy of the others.
When the azimuth viewing angles of the two azimuths of the viewing position are calculated from the position reference information, the following method can also be used:
calculating, from the position reference information, the azimuth viewing angle of one of the two azimuths;
and calculating the azimuth viewing angle of the other azimuth from the calculated azimuth viewing angle and the angular relationship between the adjacent azimuths.
If the two azimuth viewing angles of the double-folding screen field to be calculated are, for example, the front and right azimuth viewing angles, the front azimuth viewing angle is first calculated with its viewing-angle formula: as shown in Fig. 10, the front azimuth viewing angle is FOV = 2∠θ, where tan θ = (L1/2 + s)/y; L1 is the lateral length of the viewing space, i.e. the length of the front projection surface of the double-folding screen field, s is the lateral offset from the centre of the viewing space, and y is the viewing distance straight ahead within the viewing space.
At any viewing position, the front azimuth viewing angle and the right azimuth viewing angle sum to a fixed 180 degrees; after the front azimuth viewing angle is calculated, subtracting it from 180 degrees gives the azimuth viewing angle of the right azimuth.
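A one-line sketch of that angular relationship (the 180-degree sum is taken from the text above; the function name is ours):

```python
def right_azimuth_fov(front_fov_deg: float) -> float:
    """Adjacent-azimuth relationship: the front and right azimuth viewing angles
    sum to a fixed 180 degrees at any viewing position."""
    return 180.0 - front_fov_deg

print(right_azimuth_fov(90.0))  # -> 90.0 degrees
```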
When the virtual picture corresponding to each azimuth is cut out of the virtual scene according to the two azimuths of the double-folding screen field, each orthogonal camera intercepts, in the virtual scene, the virtual picture of its azimuth by combining its azimuth viewing angle with the position reference information.
When the corresponding virtual picture is cut out of the virtual scene according to a cropping area and its azimuth viewing angle, each orthogonal camera cuts out, in the virtual scene, the virtual picture of its azimuth by combining its azimuth viewing angle, the cropping area and the position reference information.
According to another embodiment of the present invention, as shown in Fig. 3, a projection method based on a double-folding screen field comprises:
S10, acquiring position reference information corresponding to the viewing position.
S21, calculating the azimuth viewing angle of the viewing position in each of the two azimuths from the position reference information and the viewing-angle calculation formula of that azimuth.
S22, generating two orthogonal cameras and binding them to each other; each orthogonal camera is perpendicular to the projection surface corresponding to its azimuth; and the orthogonal cameras intercept, in the virtual scene, the virtual picture corresponding to each azimuth.
Specifically, binding the orthogonal cameras to each other means that their coordinates are the same and they are located at the same point.
S31, when the X coordinate information in the position reference information is on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, cutting the corresponding virtual picture out of the virtual scene according to the azimuth viewing angle corresponding to the X axis.
S32, when the X coordinate information in the position reference information is on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, calculating the cropping area corresponding to the azimuth viewing angle associated with the coordinate information on the remaining axis, and cutting the corresponding virtual picture out of the virtual scene according to that cropping area and its azimuth viewing angle.
Specifically, the X-axis centre line is the line at half the transverse extent of the viewing space, parallel to the Y axis; if the viewing space is 4 m long and 2 m wide, the X-axis centre line is the straight line at the 1 m position across the width, parallel to the Y axis.
Expressed in pixels, with the viewing space specified as 1920 dp long and 1200 dp wide, the X-axis centre line is the straight line at the 600 dp position across the width, parallel to the Y axis.
When the X coordinate information in the position reference information is 1 m or 600 dp, and the front azimuth corresponds to the X axis according to the actual display situation (i.e. the wall positions of the double-folding screen field), the virtual picture corresponding to the front azimuth viewing angle can be cut directly out of the virtual scene; this virtual picture is the normal front picture of the virtual scene and needs no further cropping, as shown in Figs. 8 and 9.
When the position reference information also contains Y coordinate information or Z coordinate information, with the Y axis corresponding to the left and right azimuths and the Z axis to the upper and lower azimuths,
the pictures corresponding to the left, right, upper and lower azimuth viewing angles are no longer normal pictures, and the normal pictures need to be cropped.
According to the actual display situation (i.e. the wall positions of the double-folding screen field), one azimuth is selected from the left, right, upper and lower azimuth viewing angles, and the virtual picture corresponding to each selected azimuth is then cut out.
Specifically, when the viewing position is the central position, as shown in Fig. 7, the azimuth viewing angles of every pair of opposite azimuths are equal, and the virtual picture cut out of the virtual scene for each azimuth at the central position is a normal picture.
S33, when the X coordinate information in the position reference information is not on the X-axis centre line and the Y coordinate information is on the Y-axis centre line, cutting the corresponding virtual picture out of the virtual scene according to the azimuth viewing angle corresponding to the Y axis.
S34, when the X coordinate information in the position reference information is not on the X-axis centre line and the Y coordinate information is on the Y-axis centre line, calculating the cropping area corresponding to the azimuth viewing angle associated with the coordinate information on the remaining axis, and cutting the corresponding virtual picture out of the virtual scene according to that cropping area and its azimuth viewing angle.
Specifically, when the Y coordinate information in the position reference information is 2 m or 960 dp, and the Y axis corresponds to the left and right azimuths according to the actual display situation (i.e. the wall positions of the double-folding screen field), the virtual picture corresponding to the left azimuth viewing angle, or to the right azimuth viewing angle, can be cut directly out of the virtual scene; these virtual pictures are the normal left and right pictures of the virtual scene and need no further cropping.
When the position reference information also contains X coordinate information or Z coordinate information, with the X axis corresponding to the front azimuth and the Z axis to the upper and lower azimuths,
the pictures corresponding to the front, upper and lower azimuth viewing angles are no longer normal pictures, and the normal pictures need to be cropped.
According to the actual display situation (i.e. the wall positions of the double-folding screen field), one azimuth is selected from the front, upper and lower azimuth viewing angles, and the virtual picture corresponding to each selected azimuth is then cut out.
S35, when the X coordinate information is not on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, calculating the cropping area corresponding to each azimuth viewing angle, and cutting the corresponding virtual picture out of the virtual scene according to each azimuth viewing angle and its cropping area.
Specifically, when the X coordinate information in the position reference information is not on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, the pictures corresponding to the front, left, right, upper and lower azimuth viewing angles are no longer normal pictures, and the normal pictures need to be cropped.
According to the actual display situation (i.e. the wall positions of the double-folding screen field), two adjacent azimuths are selected from the front, left, right, upper and lower azimuth viewing angles, and the virtual picture corresponding to each selected azimuth is then cut out.
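Taken together, steps S31 to S35 amount to a small decision rule: the picture whose axis coordinate sits on its centre line is a normal picture, while any other selected azimuth needs a cropping area first. A sketch under that reading (the centre-line test, tolerance and return convention are ours):

```python
def needs_cropping(x, y, x_centre, y_centre, tol=1e-6):
    """Return (crop_x_azimuth_picture, crop_y_azimuth_picture); True means a
    cropping area must be calculated before the virtual picture is cut out."""
    on_x_centre = abs(x - x_centre) < tol
    on_y_centre = abs(y - y_centre) < tol
    if on_x_centre and not on_y_centre:
        return (False, True)   # S31/S32: front (X-axis) picture is normal, crop the other
    if on_y_centre and not on_x_centre:
        return (True, False)   # S33/S34: left/right (Y-axis) picture is normal, crop the other
    if not on_x_centre and not on_y_centre:
        return (True, True)    # S35: every selected azimuth needs a cropping area
    return (False, False)      # central position: both pictures are normal (Fig. 7)
```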
S40, fusing the virtual pictures of the two azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, and projecting the scene picture onto the corresponding double-folding screen field.
The double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one projection surface is variable.
In this embodiment, when the virtual picture corresponding to each azimuth is cut out of the virtual scene, each orthogonal camera intercepts, in the virtual scene, the virtual picture of its azimuth by combining its azimuth viewing angle with the position reference information.
When the corresponding virtual picture is cut out of the virtual scene according to a cropping area and its azimuth viewing angle, each orthogonal camera cuts out, in the virtual scene, the virtual picture of its azimuth by combining its azimuth viewing angle, the cropping area and the position reference information.
In the above embodiments, there are two schemes for calculating the cropping area corresponding to each azimuth viewing angle.
The first scheme:
calculating the viewing-angle picture parameter of each azimuth from the position reference information and the azimuth viewing angle of that azimuth; and calculating the cropping area of each azimuth from its viewing-angle picture parameter and the viewing-space parameter of that azimuth.
Specifically, with the azimuth viewing angle known and the position reference information containing the viewing distance, the length of the viewing-angle region in each azimuth at the viewing position can be calculated; this length is the viewing-angle picture parameter.
The viewing-space parameter of each azimuth is the lateral length of the wall corresponding to that azimuth; since the projectable length is fixed, this parameter is known. Subtracting the wall length from the length of the viewing-angle region gives the cropping area of that azimuth.
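A sketch of the first scheme, assuming the length of the viewing-angle region at a given azimuth is obtained from the azimuth viewing angle and the viewing distance; the function and parameter names are ours.

```python
import math

def crop_length_scheme1(fov_deg, viewing_distance, wall_length):
    """First scheme: viewing-angle picture parameter (length of the viewing-angle
    region at this azimuth) minus the projectable wall length."""
    view_region_length = 2.0 * viewing_distance * math.tan(math.radians(fov_deg) / 2.0)
    return view_region_length - wall_length

# With the earlier example (FOV ~90 deg, y = 2 m, 3 m front wall) the crop length is 2s = 1 m
print(round(crop_length_scheme1(90.0, 2.0, 3.0), 3))  # -> 1.0
```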
The second scheme:
analysing the positional offset of the position reference information relative to preset position information, and calculating the corresponding cropping area from that positional offset.
Specifically, as shown in Fig. 10, the length to be cropped from the virtual picture corresponding to the front azimuth viewing angle is 2s; the front azimuth viewing angle is FOV = 2∠θ, where tan θ = (L1/2 + s)/y; L1 is the lateral length of the viewing space, s is the lateral offset from the centre of the viewing space, and y is the viewing distance straight ahead within the viewing space.
As shown in Fig. 11, the length to be cropped from the virtual picture corresponding to the left azimuth viewing angle is 2p; the left azimuth viewing angle is FOV = 2∠α, where tan α = (L2/2 + p)/x; L2 is the longitudinal length of the viewing space, p is the longitudinal offset from the centre of the viewing space, and x is the viewing distance to the left within the viewing space.
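The second scheme reduces to twice the offset from the preset (centre) position, which agrees with the first scheme for the same example (2s = 1 m); a sketch, with our naming:

```python
def crop_length_scheme2(offset_from_centre):
    """Second scheme: the crop length at an azimuth is twice the offset of the
    viewing position from the preset centre position (2s laterally, 2p longitudinally)."""
    return 2.0 * offset_from_centre

print(crop_length_scheme2(0.5))  # -> 1.0, matching the first scheme above
```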
According to an embodiment of the present invention, a double-folding screen field comprises two mutually connected projection surfaces, wherein the projection size of at least one projection surface is variable.
The projection surface is a movable wall, and its projection size is changed by moving it; or the projection surface is a foldable, retractable movable wall, and its projection size is changed by folding and retracting it; or the projection surface is formed by a plurality of drop-down curtains, and its projection size is changed by raising and lowering the curtains.
Specifically, the two walls can be positioned in various ways, for example: a double-folding screen field consisting of a front wall and a left wall; of a front wall and a right wall; of a front wall and an upper wall; of a front wall and a lower wall; and so on, as long as the two walls are connected to each other to form a whole enclosing a three-dimensional projection space.
One or more projection devices 30 are arranged for each wall to provide the basis for projecting the subsequent scene picture. Several projection devices 30 sharing one wall ensure a high-definition projection even on a long wall (for example 6 or 8 metres). For example, an 8-metre wall is served by two projection devices 30: one projects the scene picture onto the front 4 metres of the wall and the other onto the rear 4 metres, ensuring the projection effect.
There are various ways to vary the size of the double-folding screen field, for example: 1. one or more walls are movable walls that can move along a guide rail, changing the projectable range of the double-folding screen field; 2. one or more walls are movable walls that can fold and retract like a folding fan, and folding and retracting the corresponding wall as required adjusts the projectable range of the double-folding screen field; 3. a side wall is formed by several curtains mounted above on a grid, with the width of each curtain set as required (for example 50 cm); when the front and left walls are needed, the curtains at the corresponding positions are lowered, the number of curtains being determined by the front and left sizes; taking a 3-metre front wall and 50 cm curtains as an example, six curtains are lowered to form the front wall. Other ways of making the projection size of the double-folding screen field variable are of course also possible and are not limited here.
In the double-folding screen field, one wall may be movable while the other is fixed, or both walls may be movable, selected according to actual requirements.
The double-folding screen field of this embodiment can change the size of the projected picture according to the user's requirements and can be flexibly applied to users with different requirements, improving the user experience.
According to an embodiment of the present invention, as shown in Figs. 4 and 5, a projection system based on a double-folding screen field comprises: an intelligent device 20, a projection device 30 and a double-folding screen field; the intelligent device 20 is communicatively connected with the projection device 30;
a mobile terminal 10 for acquiring the viewing position in the viewing space;
the intelligent device 20 comprises:
an acquisition module 21 for acquiring the position reference information corresponding to the viewing position;
a calculation module 22, connected to the acquisition module 21, for calculating, from the position reference information, the azimuth viewing angles of the viewing position in two azimuths;
specifically, two orthogonal cameras are generated and bound to each other (i.e. their coordinates are the same); each orthogonal camera is perpendicular to the projection surface corresponding to its azimuth; and the orthogonal cameras intercept, in the virtual scene, the virtual picture corresponding to each azimuth;
a picture generation module 23, connected to the calculation module 22, for generating, in the virtual scene, a virtual picture corresponding to each azimuth viewing angle;
a picture fusion module 24, connected to the picture generation module 23, for fusing the virtual pictures of the two azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position;
the projection device 30 projects the scene picture onto the double-folding screen field, and the inner side of the double-folding screen field contains the viewing position;
the double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one projection surface is variable.
In addition to the above, this embodiment further includes the following.
Calculating, from the position reference information, the azimuth viewing angles of the viewing position in two azimuths comprises: calculating the azimuth viewing angle of the viewing position in each of the two azimuths from the position reference information and the viewing-angle calculation formula of that azimuth.
Generating, in the virtual scene, a virtual picture corresponding to each azimuth viewing angle comprises:
when the X coordinate information in the position reference information is on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, cutting the corresponding virtual picture out of the virtual scene according to the azimuth viewing angle corresponding to the X axis;
when the X coordinate information in the position reference information is on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, calculating the cropping area corresponding to the azimuth viewing angle associated with the coordinate information on the remaining axis, and cutting the corresponding virtual picture out of the virtual scene according to that cropping area and its azimuth viewing angle;
when the X coordinate information in the position reference information is not on the X-axis centre line and the Y coordinate information is on the Y-axis centre line, cutting the corresponding virtual picture out of the virtual scene according to the azimuth viewing angle corresponding to the Y axis;
when the X coordinate information in the position reference information is not on the X-axis centre line and the Y coordinate information is on the Y-axis centre line, calculating the cropping area corresponding to the azimuth viewing angle associated with the coordinate information on the remaining axis, and cutting the corresponding virtual picture out of the virtual scene according to that cropping area and its azimuth viewing angle;
when the X coordinate information is not on the X-axis centre line and the Y coordinate information is not on the Y-axis centre line, calculating the cropping area corresponding to each azimuth viewing angle, and cutting the corresponding virtual picture out of the virtual scene according to each azimuth viewing angle and its cropping area.
In one way, calculating the cropping area corresponding to an azimuth viewing angle specifically comprises: calculating the viewing-angle picture parameter of each azimuth from the position reference information and the azimuth viewing angle of that azimuth;
and calculating the cropping area of each azimuth from its viewing-angle picture parameter and the viewing-space parameter of that azimuth.
In another way, calculating the cropping area corresponding to an azimuth viewing angle specifically comprises: analysing the positional offset of the position reference information relative to preset position information, and calculating the corresponding cropping area from that positional offset.
The scene model of the virtual scene and the space model of the viewing space (i.e. the space model of the double-folding screen field) are in a specific proportional relationship; the specific proportional relationship is 1:1.
Specifically, the implementation of this system embodiment is the same as that of the method embodiments above and is not described in detail here. The intelligent device 20 may be a computer.
In this embodiment, the size of the double-folding screen field is variable and its projection range can be changed according to actual requirements, making it flexible and widely applicable; the scene picture projected onto the double-folding screen field changes with the viewing position, so that the viewer truly sees a stereoscopic scene image, greatly improving the user experience.
In this technical solution, after one azimuth viewing angle has been calculated, the other azimuth viewing angle is obtained directly from the relationship between them, which is convenient and fast.
In this technical solution, different azimuths have different viewing-angle calculation formulas, making the calculation results more accurate and reliable.
In this technical solution, the virtual picture is obtained by direct cutting or by cropping according to the different situations, so the resulting scene picture has a stereoscopic effect, improving the viewer's experience.
This technical solution provides a method of calculating the cropping area that is convenient to compute.
In this technical solution, different ways of obtaining the virtual picture are selected according to the coordinate information in the position reference information, ensuring the viewing effect for a viewer at the viewing position.
This technical solution provides another way of calculating the cropping area, which is widely applicable.
In this technical solution, the size of the projection area of the double-folding screen field is adjustable and can meet different requirements.
It should be noted that the above embodiments can be freely combined as required. The foregoing is only a preferred embodiment of the present invention; various improvements and refinements can be made by those skilled in the art without departing from the principle of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.
Claims (10)
1. A projection method based on a double-folding screen field, characterized by comprising the following steps:
acquiring position reference information corresponding to a viewing position;
calculating, from the position reference information, the azimuth viewing angles of the viewing position in two azimuths;
generating, in a virtual scene, a virtual picture corresponding to each azimuth viewing angle;
fusing the virtual pictures of the two azimuth viewing angles into a scene picture of the virtual scene as seen from the viewing position, and projecting the scene picture onto the corresponding double-folding screen field;
wherein the double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one projection surface is variable.
2. The screen venue-based projection method of claim 1, wherein said calculating an azimuthal perspective of said viewing location in two orientations in combination with location reference information comprises:
and respectively calculating the azimuth viewing angles of the viewing position in the two azimuths by combining the position reference information and the viewing angle calculation formulas of the two azimuths.
3. The projection method based on the double-fold screen field as claimed in claim 1, wherein the generating of the virtual picture corresponding to each azimuth viewing angle by the virtual scene according to each azimuth viewing angle comprises:
when the X coordinate information in the position reference information is on the X-axis central line and the Y coordinate information is not on the Y-axis central line, cutting the position reference information into corresponding virtual pictures according to the azimuth view angle corresponding to the X axis in the virtual scene;
and calculating cutting areas corresponding to the orientation visual angles corresponding to the coordinate information on the residual axes in the position reference information, and cutting out corresponding virtual pictures in the virtual scene according to the cutting areas and the orientation visual angles corresponding to the cutting areas.
4. The projection method based on the double-folding screen field as claimed in claim 1, wherein the generating, in the virtual scene, of the virtual picture corresponding to each azimuth viewing angle comprises:
when the X coordinate information in the position reference information is not on the X-axis central line and the Y coordinate information is on the Y-axis central line, cutting out the corresponding virtual picture in the virtual scene according to the azimuth viewing angle corresponding to the Y axis;
and calculating the cutting area corresponding to the azimuth viewing angle of the coordinate information on the remaining axis in the position reference information, and cutting out the corresponding virtual picture in the virtual scene according to that cutting area and its azimuth viewing angle.
5. The projection method based on the double-folding screen field as claimed in claim 1, wherein the generating, in the virtual scene, of the virtual picture corresponding to each azimuth viewing angle comprises:
when the X coordinate information is not on the X-axis central line and the Y coordinate information is not on the Y-axis central line, respectively calculating the cutting area corresponding to each azimuth viewing angle;
and cutting out the corresponding virtual picture in the virtual scene according to each azimuth viewing angle and its cutting area.
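Claims 3 to 5 together form a case split on whether the viewing position lies on the X-axis and/or Y-axis centre line. The sketch below expresses that dispatch only; the centre-line values and the mode names are assumptions for illustration.

```python
X_CENTER, Y_CENTER = 3.0, 2.0   # assumed centre lines of the two axes
EPS = 1e-6

def cutting_mode(x: float, y: float) -> dict:
    """Which cutting strategy claims 3-5 prescribe for each azimuth at position (x, y)."""
    on_x_center = abs(x - X_CENTER) < EPS
    on_y_center = abs(y - Y_CENTER) < EPS
    if on_x_center and not on_y_center:
        # Claim 3: cut directly by angle for the X azimuth; a cutting area is needed for the rest.
        return {"x": "cut_by_angle", "y": "cut_by_angle_and_area"}
    if on_y_center and not on_x_center:
        # Claim 4: cut directly by angle for the Y azimuth; a cutting area is needed for the rest.
        return {"x": "cut_by_angle_and_area", "y": "cut_by_angle"}
    # Claim 5: off both centre lines, a cutting area is calculated per azimuth.
    # (The doubly centred case is not separately claimed and simply falls through here.)
    return {"x": "cut_by_angle_and_area", "y": "cut_by_angle_and_area"}

print(cutting_mode(3.0, 1.0))  # {'x': 'cut_by_angle', 'y': 'cut_by_angle_and_area'}
```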
6. The projection method based on the double-folding screen field as claimed in claim 3, 4 or 5, wherein the calculating of the cutting area corresponding to the azimuth viewing angle specifically comprises:
calculating a viewing-angle picture parameter corresponding to each azimuth according to the position reference information and the azimuth viewing angle corresponding to that azimuth;
and calculating the cutting area corresponding to each azimuth according to the viewing-angle picture parameter and the viewing space parameter corresponding to that azimuth.
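The claim does not define the two parameters. In the sketch below, the "viewing-angle picture parameter" is taken to be the width of the projection surface covered by the azimuth viewing angle at the viewer's distance, and the "viewing space parameter" to be the physical width of that surface, so the cutting area comes out as a fraction of the full virtual picture; these interpretations are assumptions for illustration only.

```python
import math

def viewing_angle_picture_parameter(distance: float, viewing_angle_deg: float) -> float:
    """Width of the projection surface covered by the azimuth viewing angle at that distance."""
    return 2.0 * distance * math.tan(math.radians(viewing_angle_deg) / 2.0)

def cutting_area_fraction(covered_width: float, surface_width: float) -> float:
    """Fraction of the full-width virtual picture to keep for this azimuth."""
    return min(covered_width / surface_width, 1.0)

covered = viewing_angle_picture_parameter(distance=2.0, viewing_angle_deg=90.0)
print(cutting_area_fraction(covered, surface_width=6.0))  # ~0.67 of the picture width
```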
7. The projection method based on the double-folding screen field as claimed in claim 3, 4 or 5, wherein the calculating of the cutting area corresponding to the azimuth viewing angle specifically comprises:
analyzing the position deviation information of the position reference information relative to preset position information, and calculating the corresponding cutting area by combining the position deviation information.
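As a sketch of this alternative only, the cutting area below is modelled as a window in normalised picture coordinates that slides in proportion to the viewer's deviation from a preset viewing position; the preset position, the shift factor and the window size are all assumed values, not figures from the patent.

```python
PRESET_POSITION = (3.0, 2.0)   # assumed preset viewing position
SHIFT_PER_METRE = 0.1          # assumed shift of the cutting window per metre of deviation

def cutting_window(x: float, y: float, width: float = 0.8, height: float = 0.8):
    """Return (left, top, right, bottom) of the cutting area in normalised picture coordinates."""
    dx, dy = x - PRESET_POSITION[0], y - PRESET_POSITION[1]   # position deviation information
    cx = min(max(0.5 + dx * SHIFT_PER_METRE, width / 2), 1.0 - width / 2)
    cy = min(max(0.5 + dy * SHIFT_PER_METRE, height / 2), 1.0 - height / 2)
    return (cx - width / 2, cy - height / 2, cx + width / 2, cy + height / 2)

print(cutting_window(4.0, 2.0))  # window shifted right for a viewer 1 m to the right of the preset
```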
8. A double-folding screen field applied to the projection method based on the double-folding screen field according to any one of claims 1 to 7, characterized by comprising: two mutually connected projection surfaces, wherein the projection size of at least one of the projection surfaces is variable.
9. The double-folding screen field as claimed in claim 8, wherein:
the projection surface is a movable wall surface, and the projection size of the projection surface is changed by moving the wall surface;
or
the projection surface is a foldable and retractable movable wall surface, and the projection size of the projection surface is changed by folding and retracting the wall surface;
or
the projection surface is formed by a plurality of pieces of curtain cloth that can be pulled down, and the projection size of the projection surface is changed by unfolding or folding the curtain cloth.
10. A projection system applied to the projection method based on the double-folding screen field as claimed in any one of claims 1 to 7, characterized by comprising: an intelligent device, a projection device and a double-folding screen field;
the intelligent device comprises:
the acquisition module is used for acquiring position reference information corresponding to the viewing position;
the calculation module is used for calculating the azimuth viewing angles of the viewing position in the two azimuths by combining the position reference information;
the picture generation module is used for generating, in the virtual scene, a virtual picture corresponding to each azimuth viewing angle;
the picture fusion module is used for fusing the virtual pictures of the two azimuth viewing angles into a scene picture for viewing the virtual scene at the viewing position;
the projection device projects the scene picture onto the double-folding screen field, and the inner side of the double-folding screen field forms the viewing position;
the double-folding screen field comprises two mutually connected projection surfaces, and the projection size of at least one of the projection surfaces is variable.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811185483.1A CN111045286A (en) | 2018-10-11 | 2018-10-11 | Projection method and system based on double-folding screen field and double-folding screen field |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111045286A true CN111045286A (en) | 2020-04-21 |
Family
ID=70229105
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811185483.1A Withdrawn CN111045286A (en) | 2018-10-11 | 2018-10-11 | Projection method and system based on double-folding screen field and double-folding screen field |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111045286A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN2898873Y (en) * | 2005-12-16 | 2007-05-09 | 伍炳康 | Amplitude-variable projection screen |
US20120320049A1 (en) * | 2010-03-04 | 2012-12-20 | Tovis Co., Ltd. | Multi-layer image display device |
CN103543595A (en) * | 2012-07-12 | 2014-01-29 | 希杰希界维株式会社 | Multi-projection system |
CN103257454A (en) * | 2013-05-27 | 2013-08-21 | 苏州和氏设计营造股份有限公司 | Naked eye 3D projection display system |
CN107193372A (en) * | 2017-05-15 | 2017-09-22 | 杭州隅千象科技有限公司 | From multiple optional position rectangle planes to the projecting method of variable projection centre |
Non-Patent Citations (1)
Title |
---|
周安 (Zhou An) et al.: "Immersive virtual reality visualization and display system based on big data technology", 《北塔软件》 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115022615A (en) * | 2022-05-10 | 2022-09-06 | 南京青臣创意数字科技有限公司 | Virtual sensing system and method based on projection |
CN115243029A (en) * | 2022-09-22 | 2022-10-25 | 苏州域光科技有限公司 | Image display method, device, equipment, system and storage medium |
Similar Documents
Publication | Title |
---|---|
US8368687B2 | Image generating apparatus and image generating method |
US9195121B2 | Markerless geometric registration of multiple projectors on extruded surfaces using an uncalibrated camera |
CN101843107B | OSMU(one source multi use)-type stereoscopic camera and method of making stereoscopic video content thereof |
US8259173B2 | Image generating apparatus and image generating method |
CN101385337B | Method and system for aligning an array of rear-projectors |
CN107705241B | Sand table construction method based on tile terrain modeling and projection correction |
US10567649B2 | Parallax viewer system for 3D content |
CN101189643A | 3D image forming and displaying system |
US10136121B2 | System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display |
CN104954769A | Immersion type ultra-high-definition video processing system and method |
EP0817125A3 | Stereoscopic CG moving image generating apparatus |
CN104427230A | Reality enhancement method and reality enhancement system |
CN111050148A | Three-folding-screen-site-based projection method and system and three-folding-screen site |
US10606149B2 | Information processing device, information processing method, and program |
CN111045286A | Projection method and system based on double-folding screen field and double-folding screen field |
CN111179407A | Virtual scene creating method, virtual scene projecting system and intelligent equipment |
CN208506731U | Image display systems |
CN111050147A | Projection method and system based on five-fold screen field and five-fold screen field |
CN111050144A | Projection method and system based on six-fold screen field and six-fold screen field |
CN111050145B | Multi-screen fusion imaging method, intelligent device and system |
CN111050156A | Projection method and system based on four-fold screen field and four-fold screen field |
CN104777700B | Height immerses projection multi-projector Optimization deployment method |
CN111050146B | Single-screen imaging method, intelligent equipment and system |
CN111131726B | Video playing method, intelligent device and system based on multi-screen fusion imaging |
CN105120157A | Image processing method and apparatus |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20200421 |