CN108989681A - Panorama image generation method and device - Google Patents
- Publication number
- CN108989681A CN108989681A CN201810877943.0A CN201810877943A CN108989681A CN 108989681 A CN108989681 A CN 108989681A CN 201810877943 A CN201810877943 A CN 201810877943A CN 108989681 A CN108989681 A CN 108989681A
- Authority
- CN
- China
- Prior art keywords
- image
- dimensional scene
- panoramic image
- generation method
- preset target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The disclosure provides a panoramic image generation method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: loading a three-dimensional scene; positioning an image sensor at an origin of the three-dimensional scene; judging whether a predetermined target appears in a first image acquired by the image sensor; if the predetermined target appears, extracting the predetermined target from the first image; and generating the panoramic image according to the three-dimensional scene and the predetermined target. The technical solution extracts the predetermined target appearing in the image sensor and then fuses it with the three-dimensional scene, so that the real image and the three-dimensional scene merge better and the fusion is finer.
Description
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to a panoramic image generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, intelligent terminals are used in an ever wider range of applications; for example, they can play music, run games, chat on the internet, and take pictures. As for photographing, the cameras of intelligent terminals have reached more than ten million pixels, offering high definition and a photographing effect comparable to that of a professional camera.
At present, when an intelligent terminal is used to take photos or record video, not only can conventional photographing effects be achieved with the photographing software built in at the factory, but additional effects can also be obtained by downloading application programs (APPs) from the network side, for example APPs providing dim-light detection, a beauty camera, super-pixel functions, and the like. At present, three-dimensional scenes can also be generated by an APP.
A three-dimensional scene is currently generated from a template or from images. When such a scene is fused with a real scene acquired by an image sensor, the fusion can be poor; for example, unwanted objects in the real scene are fused into the three-dimensional scene, which degrades the display effect.
Disclosure of Invention
According to one aspect of the present disclosure, the following technical solutions are provided:
a panoramic image generation method, comprising:
loading a three-dimensional scene;
positioning an image sensor at an origin of the three-dimensional scene;
judging whether a preset target appears in a first image acquired by the image sensor;
if the preset target appears, extracting the preset target from the first image;
and generating the panoramic image according to the three-dimensional scene and the preset target.
Further, the loading the three-dimensional scene includes: and acquiring a background image of the three-dimensional scene, and generating the three-dimensional scene according to the background image.
Further, the three-dimensional scene is a hexahedron, and the background image includes images of 6 faces of the hexahedron.
Further, the determining whether the predetermined target appears in the first image acquired by the image sensor includes: and identifying the characteristic points of the preset target, and judging whether the preset target appears according to the characteristic points.
Further, before determining whether a predetermined target appears in the first image acquired by the image sensor, the method further includes: and reading data of a pose sensor, judging the orientation of an image sensor according to the data of the pose sensor, and displaying a partial scene image of the three-dimensional scene on a panoramic image generation device according to the orientation.
Further, the data of the pose sensor includes a position and a posture, the position is used for describing the position of the image sensor in the three-dimensional scene, and the posture is used for describing the shooting angle of the image sensor.
Further, the generating the panoramic image according to the three-dimensional scene and the predetermined target includes: and generating the panoramic image according to the partial scene image of the three-dimensional scene and the preset target.
Further, the generating the panoramic image according to the three-dimensional scene and the predetermined target includes: and taking the three-dimensional scene as the background of a panoramic image, taking the preset target as the foreground of the panoramic image, and fusing the three-dimensional scene and the preset target to generate the panoramic image.
Further, the extracting the predetermined target from the first image includes: extracting the foreground and the background of the first image, acquiring the predetermined target, and rendering the rest of the foreground and the background transparent.
Further, before determining whether a predetermined target appears in the first image acquired by the image sensor, the method further includes: the type of the preset target is selected.
According to another aspect of the present disclosure, the following technical solutions are also provided:
a panoramic image generation apparatus comprising:
the loading module is used for loading the three-dimensional scene;
a positioning module for positioning an image sensor at an origin of the three-dimensional scene;
the preset target judgment module is used for judging whether a preset target appears in a first image acquired by the image sensor;
a predetermined target extraction module for extracting the predetermined target from the first image if the predetermined target appears;
and the panorama generating module is used for generating the panoramic image according to the three-dimensional scene and the preset target.
Further, the loading module is further configured to obtain a background image of the three-dimensional scene, and generate the three-dimensional scene according to the background image.
Further, the three-dimensional scene is a hexahedron, and the background image includes images of 6 faces of the hexahedron.
Further, the predetermined target determining module is further configured to identify a feature point of the predetermined target, and determine whether the predetermined target appears according to the feature point.
Further, the panorama generating module further includes: the foreground and background setting module is used for taking the three-dimensional scene as the background of the panoramic image and taking the preset target as the foreground of the panoramic image; and the fusion module is used for fusing the three-dimensional scene and a preset target to generate the panoramic image.
Further, the predetermined target extraction module further includes: a foreground and background extraction module for extracting the foreground and background of the first image to obtain the predetermined target; and a transparency module for rendering the rest of the foreground and the background transparent.
Further, the panoramic image generation apparatus further includes: and the type selection module is used for selecting the type of the preset target.
Further, the panoramic image generation apparatus further includes: and the pose sensor data reading module is used for reading data of a pose sensor, judging the orientation of the image sensor according to the data of the pose sensor, and displaying a partial scene image of the three-dimensional scene on the panoramic image generation device according to the orientation.
Further, the data of the pose sensor includes a position and a posture, the position is used for describing the position of the image sensor in the three-dimensional scene, and the posture is used for describing the shooting angle of the image sensor.
Further, the panorama generating module is configured to generate the panoramic image according to the partial scene image of the three-dimensional scene and the predetermined target.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
an electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions, such that the processor when executing performs the steps of any of the above methods.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
a computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the methods described above.
The embodiment of the disclosure provides a panoramic image generation method, a panoramic image generation apparatus, a hardware device, and a computer-readable storage medium. The panoramic image generation method comprises the following steps: loading a three-dimensional scene; positioning an image sensor at an origin of the three-dimensional scene; judging whether a predetermined target appears in a first image acquired by the image sensor; if the predetermined target appears, extracting the predetermined target from the first image; and generating the panoramic image according to the three-dimensional scene and the predetermined target. In the prior art, only the whole real image can be fused with a three-dimensional scene, and the fusion effect is not fine enough, for example when a single object in the image is to be embedded into the three-dimensional scene. The present technical solution extracts the predetermined target from the image captured by the image sensor and then fuses only that target with the three-dimensional scene, so that the real image and the three-dimensional scene merge better and the fusion is finer.
The foregoing is a summary of the present disclosure, intended to promote a clear understanding of its technical means; the disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
Fig. 1 is a schematic flow chart diagram of a panoramic image generation method according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a panoramic image generation method according to yet another embodiment of the present disclosure;
FIG. 3 is a diagram illustrating a method for determining a viewing range of an image sensor in the embodiment of FIG. 2;
fig. 4 is a schematic structural diagram of a panoramic image generation apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a panoramic image generation apparatus according to yet another embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device according to one embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a computer-readable storage medium according to one embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a panoramic image generation terminal according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
In order to solve the technical problem of how to improve the user experience effect, the embodiment of the present disclosure provides a panoramic image generation method. The panoramic image generation method provided by the present embodiment may be executed by a panoramic image generation apparatus, which may be implemented as software or as a combination of software and hardware, and may be integrally provided in some device in an image processing system, such as an image processing server or an image processing terminal device. As shown in fig. 1, the panoramic image generation method mainly includes steps S1 to S5 as follows. Wherein:
step S1: and loading the three-dimensional scene.
The three-dimensional scene can be loaded in two ways. The first is to directly load a three-dimensional scene template, i.e. a ready-made three-dimensional scene; obtaining and configuring the template completes the loading. The second is to load the images required by the three-dimensional scene. For example, if the three-dimensional scene is a hexahedron, images for its six faces are acquired and their positions are set (up, down, left, right, front, and back); when the scene is loaded, it is generated directly from these six images. The second way involves stitching the pictures: feature points can be marked on the images, and during stitching the images are joined according to the feature points to generate the three-dimensional scene.
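The hexahedral scene described above behaves like a skybox: rendering it reduces to mapping a viewing direction onto one of the six face images. The following is an illustrative sketch (the function and face names are assumptions of this sketch, not from the patent) of that standard cube-map lookup:

```python
FACES = ("right", "left", "up", "down", "front", "back")

def direction_to_face_uv(d):
    """Map a 3D direction vector to (face, u, v), with u and v in [0, 1]."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                      # dominant X axis
        face, u, v, m = ("right", -z, -y, x) if x > 0 else ("left", z, -y, -x)
    elif ay >= az:                                 # dominant Y axis
        face, u, v, m = ("up", x, z, y) if y > 0 else ("down", x, -z, -y)
    else:                                          # dominant Z axis
        face, u, v, m = ("front", x, -y, z) if z > 0 else ("back", -x, -y, -z)
    # Normalise the in-face coordinates from [-m, m] to [0, 1].
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)
```

Looking straight ahead, `direction_to_face_uv((0.0, 0.0, 1.0))` lands in the centre of the front face.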
Step S2: an image sensor is positioned at an origin of the three-dimensional scene.
In this embodiment, the panoramic image generation method may be executed on a terminal that includes an image sensor, such as a camera. After the three-dimensional scene is loaded, the image sensor is placed at an origin of the scene, and a three-dimensional coordinate system is established in the scene from that origin. When the terminal moves, its position can either always be taken as the origin, or the origin can be fixed, in which case the terminal moves within the three-dimensional scene and its coordinates are determined in real time.
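The two coordinate conventions above can be sketched as follows (a minimal illustration; the class and flag names are assumptions of this sketch): in one mode the scene follows the sensor so the sensor always sits at the origin, in the other the origin is fixed and the sensor's coordinates are updated as it moves.

```python
class SceneCamera:
    """Tracks the image sensor's position under either origin convention."""

    def __init__(self, origin_follows_sensor=True):
        self.origin_follows_sensor = origin_follows_sensor
        self.position = (0.0, 0.0, 0.0)   # the sensor starts at the origin

    def move(self, dx, dy, dz):
        if self.origin_follows_sensor:
            # The scene moves with the sensor; the sensor stays at the origin.
            return self.position
        # Fixed origin: update the sensor's coordinates in real time.
        x, y, z = self.position
        self.position = (x + dx, y + dy, z + dz)
        return self.position
```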
Step S3: and judging whether a preset target appears in the first image acquired by the image sensor.
In this embodiment, the image sensor captures a first image of reality, which contains multiple objects of the real scene, such as a table, a chair, and a person in a room. A predetermined target, such as a person, may be set in advance, and whether a person appears in the first image can be recognized from facial feature points. Taking the human face as an example of feature-point acquisition: the face contour mainly comprises five parts (eyebrows, eyes, nose, mouth, and cheeks), and sometimes also the pupils and nostrils. A complete description of the face contour generally requires about 60 feature points. If only the basic structure is to be described, without detailing every part or describing the cheeks, the number of feature points can be reduced accordingly; if the pupils, the nostrils, or finer facial features are to be described, the number can be increased. Extracting the facial feature points from the image means finding the position coordinates of each contour feature point in the face image, i.e. feature point localization. This process relies on the features corresponding to the feature points: once image features that clearly identify the feature points are obtained, the image is searched and compared according to these features, and the positions of the feature points are accurately located on the image.
Since a feature point occupies only a very small area in an image (usually only a few to a few dozen pixels), the region occupied by its corresponding features is likewise limited and local. Two types of feature extraction methods are currently used: (1) extracting one-dimensional range image features perpendicular to the contour; (2) extracting two-dimensional range image features from a square neighborhood of the feature point. There are many ways to implement these two methods, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on. The implementations differ in the number of feature points used, in accuracy, and in speed, and thus suit different application scenarios.
In this embodiment, a part of the predetermined target may be recognized through a part of the feature points, and then the entire predetermined target may be continuously recognized according to the part, for example, by face recognition, it is determined that a person is present, and then the body and limbs of the person may be recognized according to other feature points of the body of the person.
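The decision in step S3 can be reduced to a simple rule over the located feature points. The sketch below is illustrative only (the function name, the confidence field, and the threshold of 60 points, taken from the face-contour discussion above, are assumptions): a target counts as present when enough of its feature points have been located with sufficient confidence.

```python
def target_present(located_points, required=60, min_confidence=0.5):
    """Decide whether a predetermined target appears in the first image.

    located_points: iterable of (x, y, confidence) tuples, one per
    candidate feature point found by the detector.
    """
    confident = [p for p in located_points if p[2] >= min_confidence]
    return len(confident) >= required
```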
Before this step S3, the method may further include:
in step S31, the type of the preset target is selected.
In this step, the type of the preset target, such as a character, a house, a tree, a flower, etc., may be selected. Each type of preset target is identified using a different feature point. It will be appreciated that multiple preset target types may be selected simultaneously to identify multiple preset targets.
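Step S31 can be organized as a registry binding each selectable target type to its own feature-point detector, with several types active at once. The detector names and registry below are illustrative assumptions, not the patent's implementation:

```python
def detect_person(image):
    # Placeholder for facial/body feature-point detection
    # (e.g. an ASM/AAM or deep-learning model, as discussed above).
    return []

def detect_tree(image):
    # Placeholder for a detector using tree-specific feature points.
    return []

DETECTORS = {"person": detect_person, "tree": detect_tree}

def find_targets(image, selected_types):
    """Run only the detectors for the user-selected target types."""
    return {t: DETECTORS[t](image) for t in selected_types if t in DETECTORS}
```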
Step S4: and if the predetermined target appears, extracting the predetermined target from the first image.
If a predetermined target, such as a person, is identified in step S3, the person is extracted from the real image; the extraction can be accomplished with a matting operation. Matting is a technique for separating the foreground of an image from the background: the user designates a few foreground and background areas in the image, and the technique separates out all foreground objects according to certain judgment rules. The image may be represented by the following formula:
C=αF+(1-α)B
where C is the observed image pixel, F the foreground image pixel, B the background image pixel, and α the transparency, with 0 ≤ α ≤ 1. Foreground matting amounts to solving for these three unknowns from the value of the observed pixel C. Many methods can be used in the solving process. One is sampling-based: the foreground and background components of each unknown pixel are estimated from the known pixels around it, and for each unknown pixel an optimal foreground and background pair is searched in the known region to solve for the α value.
In the present disclosure, when the matting method is used, the predetermined target can be extracted in different manners. Typically, after the predetermined target is identified in step S3, the target and the image within a preset range around it are matted to extract the target directly; alternatively, all foreground and background images can be extracted, the predetermined target obtained from the foreground, and the other foreground images and the background rendered transparent, thereby extracting the target.
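The matting model C = αF + (1 − α)B and the transparency-based extraction above can be sketched in a few lines of numpy (a minimal illustration; the function names are assumptions of this sketch):

```python
import numpy as np

def composite(alpha, foreground, background):
    """Per-pixel matting equation C = aF + (1 - a)B.

    alpha has shape (H, W); foreground/background have shape (H, W, 3).
    """
    a = alpha[..., None]                        # broadcast over channels
    return a * foreground + (1.0 - a) * background

def extract_target(image, target_mask):
    """Return an RGBA image: target pixels opaque, all others transparent."""
    alpha = target_mask.astype(image.dtype)
    return np.dstack([image, alpha])
```

With α = 1 everywhere the result is the foreground alone; with α = 0 it is the background alone, matching the extremes of the formula.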
And step S5, generating the panoramic image according to the three-dimensional scene and the preset target.
The three-dimensional scene was loaded in step S1 and the predetermined target extracted in step S4. In this step they are synthesized into a panoramic image: the three-dimensional scene serves as the background component of the panoramic image and the predetermined target as its foreground component, according to the formula:
C=αF+(1-α)B
Once the α value is set, the synthesized panoramic image can be calculated. The α value may be set in advance, or a setting interface may be provided so that the user can customize the α value and preview the synthesis effect of different α values in real time on the panoramic image generation apparatus.
In this embodiment, a rendering sequence may be generated from the three-dimensional scene and the predetermined target, in which the three-dimensional scene is rendered first and the predetermined target afterwards, and the three-dimensional scene is set to be transparent (α = 1) wherever it overlaps the predetermined target.
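The user-adjustable α preview described above can be sketched as follows (illustrative only; the global `user_alpha` scaling of the target's matte is an assumption of this sketch, not the patent's exact scheme):

```python
import numpy as np

def preview(scene, target_rgb, target_mask, user_alpha):
    """Blend the extracted target over the scene for real-time preview.

    scene, target_rgb: (H, W, 3) float arrays; target_mask: (H, W) matte.
    user_alpha scales the target's matte, so different alpha values can
    be previewed before the panorama is finalised.
    """
    a = user_alpha * target_mask[..., None]     # per-pixel effective alpha
    return a * target_rgb + (1.0 - a) * scene   # C = aF + (1 - a)B
```

At `user_alpha = 1` the target fully covers the scene where it overlaps (the α = 1 case above); at `user_alpha = 0` only the scene remains.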
As shown in fig. 2, in an alternative embodiment, before determining whether the predetermined target appears in the first image acquired by the image sensor in step S3, the method may further include:
step S32, reading data of a pose sensor, determining an orientation of the image sensor according to the data of the pose sensor, and displaying a partial scene image of the three-dimensional scene on a panoramic image generation device according to the orientation.
The pose sensor is arranged on a terminal where the image sensor is located or is directly arranged together with the image sensor, and the pose sensor is used for judging the position and the posture of the image sensor. The data of the pose sensor comprises a position and a gesture, wherein the position is used for describing the position of the image sensor in the three-dimensional scene, and the gesture is used for describing the shooting angle of the image sensor.
In this step, the orientation of the image sensor is determined from the attitude data of the pose sensor. In one embodiment the pose sensor fuses several sensors, and the attitude data is typically derived from a gyroscope. The partial scene image of the three-dimensional scene that falls within the acquisition range of the image sensor is then determined from this orientation and displayed on the panoramic image generation apparatus.
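Converting the gyroscope-derived attitude into a viewing direction is the first step of selecting the visible portion of the scene. A minimal sketch, assuming the attitude is available as yaw and pitch angles in radians (that parameterization is an assumption of this sketch):

```python
import math

def orientation_to_direction(yaw, pitch):
    """Turn yaw/pitch attitude angles into a unit viewing direction.

    The resulting vector can then drive e.g. a cube-map lookup to pick
    the partial scene image to display.
    """
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

With zero yaw and pitch the sensor looks straight down the +z axis.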
In one embodiment, the origin is always located at the position of the image sensor, that is, the origin is located at the center of the three-dimensional scene regardless of the movement of the image sensor, and the three-dimensional scene moves along with the movement of the image sensor.
In another embodiment, the origin does not move with the image sensor; when the sensor moves, the origin remains where it was generated, the sensor moves within a relative coordinate system of the three-dimensional scene, and the position data in the pose sensor is relative position data in that coordinate system. In this case, the extent of the partial scene image changes with the distance between the image sensor and the plane containing the partial scene image, and it can be calculated from that distance. As shown in fig. 3, when the image sensor 31 is at a first position C, its viewing range on the plane 32 is AB, the foot of the perpendicular from its center to the plane 32 is O, and its distance to the plane is CO; when the image sensor 31 moves to a second position C', its viewing range on the plane 32 is A'B', the foot of the perpendicular is still O, and its distance to the plane is C'O. Because the field-of-view angle of the image sensor is fixed, the viewing range is proportional to the distance to the plane:

A'B' / AB = C'O / CO

This makes it possible to calculate the size of the viewing range of the image sensor at the new position, and with it the range to be displayed on the panoramic image generation apparatus. It will be appreciated that the above is described in cross-section only; in practice the field of view of the image sensor is a plane, and the lengths AB and A'B' stand in for that plane.
In this embodiment, the step S5 of generating the panoramic image according to the three-dimensional scene and the predetermined target includes:
step S51, generating the panoramic image according to the partial scene image of the three-dimensional scene and the predetermined target.
Although the steps in the above method embodiments are described in the above order, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure need not be performed in that order; they may also be performed in other orders, such as reversed, in parallel, or interleaved. Moreover, on the basis of the above steps, those skilled in the art may add other steps, and these obvious modifications or equivalents should also fall within the protection scope of the present disclosure, which is not described herein again.
For convenience of description, only the parts relevant to the embodiments of the present disclosure are shown; for details of specific techniques not disclosed here, please refer to the method embodiments of the present disclosure.
In order to solve the problem of how to improve the image fusion effect, an embodiment of the present disclosure provides a panoramic image generation apparatus. The apparatus may perform the steps described in the panoramic image generation method embodiment above. As shown in fig. 4, the apparatus mainly includes: a loading module 41, a positioning module 42, a predetermined target determining module 43, a predetermined target extraction module 44, and a panorama generating module 45. The loading module 41 is configured to load a three-dimensional scene; the positioning module 42 is configured to position the image sensor at the origin of the three-dimensional scene; the predetermined target determining module 43 is configured to determine whether a predetermined target appears in the first image acquired by the image sensor; the predetermined target extraction module 44 is configured to extract the predetermined target from the first image if the predetermined target appears; and the panorama generating module 45 is configured to generate the panoramic image according to the three-dimensional scene and the predetermined target.
The loading module 41 is further configured to obtain a background map of the three-dimensional scene, and generate the three-dimensional scene according to the background map.
Optionally, the three-dimensional scene is a hexahedron, and the background map includes images of 6 faces of the hexahedron.
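By way of illustration (the face names here are an assumption, not from the patent), such a hexahedral background is commonly addressed like a cube map: a view direction hits the face whose axis has the largest absolute component, with the sign choosing between the two opposite faces. A minimal sketch:

```python
def cube_face(x: float, y: float, z: float) -> str:
    """Select which of the 6 hexahedron faces a view direction hits.

    The chosen face is the one along the dominant axis of (x, y, z);
    the sign of that component picks between the two opposite faces.
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:
        return "top" if y > 0 else "bottom"
    return "front" if z > 0 else "back"
```

Looking straight along +x, for example, samples the "right" face image of the background map.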
The predetermined target determining module 43 is further configured to identify a feature point of the predetermined target, and determine whether the predetermined target appears according to the feature point.
The panorama generating module 45 further includes: a foreground and background setting module, configured to take the three-dimensional scene as the background of the panoramic image and the predetermined target as the foreground of the panoramic image; and a fusion module, configured to fuse the three-dimensional scene and the predetermined target to generate the panoramic image.
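A common way such a fusion module could combine the foreground target with the scene background is per-pixel alpha blending (the Porter-Duff "over" operator); this is a minimal sketch, not an implementation taken from the patent:

```python
def blend_over(fg, bg, alpha: float):
    """Composite one foreground pixel over one background pixel.

    fg, bg: (r, g, b) tuples with channels in 0..255.
    alpha: foreground opacity in [0, 1]; 1.0 means fully opaque foreground.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return tuple(round(f * alpha + b * (1.0 - alpha)) for f, b in zip(fg, bg))
```

With `alpha = 1.0` the foreground fully covers the background; with `alpha = 0.0` only the three-dimensional scene remains visible.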
The predetermined target extraction module 44 further includes: a foreground and background extraction module, configured to extract the foreground and background of the first image to obtain the predetermined target; and a transparency module, configured to render the remaining foreground and background regions transparent.
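One minimal way to sketch the extraction-plus-transparency step, assuming the target region is already available as a boolean mask (how the mask is produced is not specified here):

```python
def extract_target(image, mask):
    """Keep only the predetermined-target pixels; make the rest transparent.

    image: rows of (r, g, b) pixels; mask: same shape, True where the pixel
    belongs to the predetermined target. Returns RGBA rows in which
    non-target pixels have alpha 0 (fully transparent).
    """
    return [
        [(r, g, b, 255 if keep else 0)
         for (r, g, b), keep in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]
```

The transparent pixels then let the three-dimensional scene show through when the foreground is composited over the background.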
The panoramic image generation apparatus further includes: a type selection module 46 for selecting the type of the predetermined target.
The panoramic image generation apparatus corresponds to the panoramic image generation method in the embodiment shown in fig. 1, and specific details may refer to the description of the panoramic image generation method, which is not described herein again.
Fig. 5 shows another embodiment of the panoramic image generation apparatus, which is based on the panoramic image generation apparatus shown in fig. 4, and further includes: and a pose sensor data reading module 51, configured to read data of a pose sensor, determine an orientation of the image sensor according to the data of the pose sensor, and display a partial scene image of the three-dimensional scene on the panoramic image generation apparatus according to the orientation.
The data of the pose sensor include a position and an attitude, where the position describes the location of the image sensor in the three-dimensional scene, and the attitude describes the shooting angle of the image sensor.
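As an illustrative sketch only (the yaw/pitch convention below is an assumption, not stated in the patent), the attitude can be converted into a unit view-direction vector, which in turn determines which partial scene image of the three-dimensional scene to display:

```python
import math

def view_direction(yaw_deg: float, pitch_deg: float):
    """Convert a yaw/pitch attitude into a unit view-direction vector.

    Convention (an assumption for illustration): yaw rotates about the
    vertical axis, pitch tilts up/down, and (0, 0) looks along +z.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

For example, a 90-degree yaw with zero pitch points the view along +x instead of +z.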
The panorama generating module 45 is configured to generate the panoramic image according to the partial scene image of the three-dimensional scene and the predetermined target.
For detailed descriptions of the working principle, the technical effect of the implementation, and the like of the embodiment of the panoramic image generation apparatus, reference may be made to the related descriptions in the foregoing embodiment of the panoramic image generation method, and details are not repeated here.
Fig. 6 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, an electronic device 60 according to an embodiment of the present disclosure includes a memory 61 and a processor 62.
The memory 61 is used to store non-transitory computer readable instructions. In particular, memory 61 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 62 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 60 to perform desired functions. In one embodiment of the present disclosure, the processor 62 is configured to execute the computer readable instructions stored in the memory 61, so that the electronic device 60 performs all or part of the aforementioned steps of the panoramic image generation method according to the embodiments of the present disclosure.
Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, the present embodiment may also include well-known structures such as a communication bus and interfaces, and these well-known structures should also fall within the protection scope of the present disclosure.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 7 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 7, a computer-readable storage medium 70 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 71 stored thereon. The non-transitory computer-readable instructions 71, when executed by a processor, perform all or part of the steps of the panoramic image generation method of the embodiments of the present disclosure described above.
The computer-readable storage medium 70 includes, but is not limited to: optical storage media (e.g., CD-ROMs and DVDs), magneto-optical storage media (e.g., MOs), magnetic storage media (e.g., magnetic tapes or removable disks), media with built-in rewritable non-volatile memory (e.g., memory cards), and media with built-in ROMs (e.g., ROM cartridges).
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 8 is a diagram illustrating a hardware structure of a panoramic image generation terminal according to an embodiment of the present disclosure. As shown in fig. 8, the panoramic image generation terminal 80 includes the above-described panoramic image generation apparatus embodiment.
The terminal device may be implemented in various forms, and the terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation apparatus, a vehicle-mounted terminal device, a vehicle-mounted panoramic image generation terminal, a vehicle-mounted electronic rearview mirror, and the like, and fixed terminal devices such as a digital TV, a desktop computer, and the like.
The terminal may also include other components as equivalent alternatives. As shown in fig. 8, the panoramic image generation terminal 80 may include a power supply unit 81, a wireless communication unit 82, an A/V (audio/video) input unit 83, a user input unit 84, a sensing unit 85, an interface unit 86, a controller 87, an output unit 88, a storage unit 89, and the like. Fig. 8 shows a terminal having various components, but it is to be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented.
The wireless communication unit 82 allows radio communication between the terminal 80 and a wireless communication system or network. The A/V input unit 83 is for receiving audio or video signals. The user input unit 84 may generate key input data according to commands input by a user to control various operations of the terminal device. The sensing unit 85 detects the current state of the terminal 80, the position of the terminal 80, the presence or absence of a touch input by the user, the orientation of the terminal 80, the acceleration or deceleration and direction of movement of the terminal 80, and the like, and generates commands or signals for controlling the operation of the terminal 80. The interface unit 86 serves as an interface through which at least one external device can connect to the terminal 80. The output unit 88 is configured to provide output signals in a visual, audio, and/or tactile manner. The storage unit 89 may store software programs for processing and control operations performed by the controller 87, or may temporarily store data that has been or will be output. The storage unit 89 may include at least one type of storage medium. Also, the terminal 80 may cooperate with a network storage device that performs the storage function of the storage unit 89 over a network connection. The controller 87 generally controls the overall operation of the terminal device. In addition, the controller 87 may include a multimedia module for reproducing or playing back multimedia data. The controller 87 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images. The power supply unit 81 receives external or internal power and supplies the appropriate power required to operate the respective elements and components under the control of the controller 87.
Various embodiments of the panoramic image generation method presented in the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, these embodiments may be implemented by using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, they may be implemented in the controller 87. For a software implementation, these embodiments may be implemented with separate software modules that each allow at least one function or operation to be performed. The software code may be implemented as a software application (or program) written in any suitable programming language, stored in the storage unit 89, and executed by the controller 87.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given as illustrative examples only and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
Also, as used herein, "or" in a list of items prefaced by "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
Various changes, substitutions, and alterations to the techniques described herein may be made without departing from the teachings defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the processes, machines, manufacture, compositions of matter, means, methods, and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (13)
1. A panoramic image generation method, characterized by comprising:
loading a three-dimensional scene;
positioning an image sensor at an origin of the three-dimensional scene;
judging whether a predetermined target appears in a first image acquired by the image sensor;
if the predetermined target appears, extracting the predetermined target from the first image;
and generating the panoramic image according to the three-dimensional scene and the predetermined target.
2. The panoramic image generation method of claim 1, wherein the loading the three-dimensional scene comprises:
and acquiring a background image of the three-dimensional scene, and generating the three-dimensional scene according to the background image.
3. The panoramic image generation method according to claim 2, characterized in that:
the three-dimensional scene is a hexahedron, and the background image comprises images of 6 faces of the hexahedron.
4. The panoramic image generation method of claim 1, wherein the determining whether a predetermined target appears in the first image captured by the image sensor comprises:
and identifying feature points of the predetermined target, and judging whether the predetermined target appears according to the feature points.
5. The panoramic image generation method of claim 1, wherein before determining whether a predetermined target is present in the first image captured by the image sensor, further comprising:
and reading data of a pose sensor, judging the orientation of an image sensor according to the data of the pose sensor, and displaying a partial scene image of the three-dimensional scene on a panoramic image generation device according to the orientation.
6. The panoramic image generation method according to claim 5, characterized in that:
the data of the pose sensor comprises a position and a gesture, wherein the position is used for describing the position of the image sensor in the three-dimensional scene, and the gesture is used for describing the shooting angle of the image sensor.
7. The panoramic image generation method of claim 5, wherein the generating the panoramic image from the three-dimensional scene and the predetermined target comprises:
and generating the panoramic image according to the partial scene image of the three-dimensional scene and the predetermined target.
8. The panoramic image generation method of claim 1, wherein the generating the panoramic image from the three-dimensional scene and the predetermined target comprises:
and taking the three-dimensional scene as the background of the panoramic image, taking the predetermined target as the foreground of the panoramic image, and fusing the three-dimensional scene and the predetermined target to generate the panoramic image.
9. The panoramic image generation method according to claim 1, wherein the extracting the predetermined target from the first image includes:
and extracting the foreground and the background of the first image, obtaining the predetermined target, and rendering the remaining foreground and background regions transparent.
10. The panoramic image generation method of claim 1, wherein before determining whether a predetermined target is present in the first image captured by the image sensor, further comprising:
the type of the preset target is selected.
11. A panoramic image generation apparatus, comprising:
the loading module is used for loading the three-dimensional scene;
a positioning module for positioning an image sensor at an origin of the three-dimensional scene;
the predetermined target determining module is used for judging whether a predetermined target appears in a first image acquired by the image sensor;
a predetermined target extraction module for extracting the predetermined target from the first image if the predetermined target appears;
and the panorama generating module is used for generating the panoramic image according to the three-dimensional scene and the predetermined target.
12. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the panoramic image generation method of any one of claims 1-10.
13. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the panoramic image generation method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810877943.0A CN108989681A (en) | 2018-08-03 | 2018-08-03 | Panorama image generation method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108989681A true CN108989681A (en) | 2018-12-11 |
Family
ID=64554547
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810877943.0A Pending CN108989681A (en) | 2018-08-03 | 2018-08-03 | Panorama image generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108989681A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102157011A (en) * | 2010-12-10 | 2011-08-17 | 北京大学 | Method for carrying out dynamic texture acquisition and virtuality-reality fusion by using mobile shooting equipment |
US20110292166A1 (en) * | 2010-05-28 | 2011-12-01 | Qualcomm Incorporated | North Centered Orientation Tracking in Uninformed Environments |
CN106791419A (en) * | 2016-12-30 | 2017-05-31 | 大连海事大学 | A kind of supervising device and method for merging panorama and details |
CN106896925A (en) * | 2017-04-14 | 2017-06-27 | 陈柳华 | The device that a kind of virtual reality is merged with real scene |
CN107633547A (en) * | 2017-09-21 | 2018-01-26 | 北京奇虎科技有限公司 | Realize the view data real-time processing method and device, computing device of scene rendering |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113360797A (en) * | 2021-06-22 | 2021-09-07 | 北京百度网讯科技有限公司 | Information processing method, device, equipment, storage medium and computer program product |
CN113360797B (en) * | 2021-06-22 | 2023-12-15 | 北京百度网讯科技有限公司 | Information processing method, apparatus, device, storage medium, and computer program product |
CN114900621A (en) * | 2022-04-29 | 2022-08-12 | 北京字跳网络技术有限公司 | Special effect video determination method and device, electronic equipment and storage medium |
WO2023207354A1 (en) * | 2022-04-29 | 2023-11-02 | 北京字跳网络技术有限公司 | Special effect video determination method and apparatus, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20181211 |