CN108965718B - Image generation method and device - Google Patents

Image generation method and device

Info

Publication number
CN108965718B
CN108965718B
Authority
CN
China
Prior art keywords
image
image sensor
sensor
dimensional scene
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810877067.1A
Other languages
Chinese (zh)
Other versions
CN108965718A (en)
Inventor
刘昂
李旭刚
游东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd
Priority to CN201810877067.1A
Publication of CN108965718A
Application granted
Publication of CN108965718B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the disclosure provide an image generation method, an image generation apparatus, a hardware device, and a computer-readable storage medium. The image generation method comprises the following steps: loading a three-dimensional scene; positioning an image sensor at an origin of the three-dimensional scene; acquiring attributes of the image sensor; and generating a first image according to the attributes of the image sensor, wherein the first image is composed of a partial image of the three-dimensional scene. In this technical scheme, the attributes of the image sensor are acquired directly, and the image the sensor captures from the three-dimensional scene is determined according to those attributes, which makes image switching more flexible.

Description

Image generation method and device
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to an image generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, intelligent terminals are used for an ever wider range of applications: listening to music, playing games, chatting online, taking photographs, and so on. As for photographing, the cameras of intelligent terminals now exceed ten megapixels, offering high definition and a photographing effect comparable to that of a professional camera.
At present, when an intelligent terminal is used to take photographs or record video, not only can the traditional photographing effects be achieved with the photographing software built in at the factory, but additional effects can also be obtained by downloading application programs (APPs) from the network side.
Some intelligent-terminal APPs can generate a three-dimensional scene. In the prior art, after the terminal generates the three-dimensional scene, the relative position of the image sensor within the scene can be moved through a control, but this manner of movement is not flexible enough: the image the sensor captures from the three-dimensional scene cannot be switched flexibly, and the user experience is poor.
Disclosure of Invention
Provided are an image generation method, an apparatus, an electronic device, and a computer-readable storage medium. According to one aspect of the present disclosure, the following technical solutions are provided:
an image generation method, comprising:
loading a three-dimensional scene;
positioning an image sensor at an origin of the three-dimensional scene;
acquiring the attribute of the image sensor;
generating a first image according to the properties of the image sensor, wherein the first image is composed of a partial image of a three-dimensional scene.
Further, loading the three-dimensional scene includes: acquiring a background image of the three-dimensional scene, and generating the three-dimensional scene according to the background image.
Further, the three-dimensional scene is a hexahedron, and the background image includes images of six faces of the hexahedron.
Further, acquiring the attributes of the image sensor includes: when the attribute of the image sensor is the orientation of the image sensor, reading the data of a pose sensor, and acquiring the orientation of the image sensor according to the data of the pose sensor.
Further, generating the first image according to the attributes of the image sensor includes: generating the first image according to the orientation of the image sensor, and displaying the first image on a display device.
Further, after the generating the first image according to the attribute of the image sensor, the method further includes: receiving an attribute change command of the image sensor, and changing the attribute of the image sensor according to the attribute change command; generating a second image according to the changed properties of the image sensor, wherein the second image is composed of a partial image of the three-dimensional scene.
Further, the attribute of the image sensor is the type of the image sensor, the type comprising a front image sensor and a rear image sensor; receiving an attribute change command of the image sensor and changing the attribute accordingly includes: receiving a type change command of the image sensor, and switching the type of the image sensor according to that command; and generating the second image according to the changed attribute includes: generating the second image according to the first image corresponding to the sensor type before the change.
Further, the attribute of the image sensor is the orientation of the image sensor; the receiving an attribute change command of the image sensor, and changing the attribute of the image sensor according to the attribute change command includes: receiving an orientation change command of the image sensor, and changing the orientation of the image sensor according to the orientation change command; generating a second image according to the changed attribute of the image sensor, including: a second image is generated according to the orientation of the image sensor after the change.
Further, after acquiring the attributes of the image sensor, the method further includes: judging whether a predetermined target appears in an image acquired by the image sensor; and if the predetermined target appears, extracting the predetermined target from the first image.
Further, the generating a first image according to the attribute of the image sensor includes: generating the first image according to the attribute of the image sensor and the predetermined target.
In order to achieve the above object, according to another aspect of the present disclosure, the following technical solutions are also provided:
an image generation apparatus comprising:
the loading module is used for loading the three-dimensional scene;
a positioning module for positioning an image sensor at an origin of the three-dimensional scene;
the attribute acquisition module is used for acquiring the attribute of the image sensor;
an image generation module to generate a first image according to attributes of the image sensor, wherein the first image is composed of a partial image of a three-dimensional scene.
Further, the loading module is further configured to obtain a background image of the three-dimensional scene, and generate the three-dimensional scene according to the background image.
Further, the three-dimensional scene is a hexahedron, and the background image includes images of six faces of the hexahedron.
Further, the attribute obtaining module is further configured to, when the attribute of the image sensor is the orientation of the image sensor, read data of a pose sensor, and obtain the orientation of the image sensor according to the data of the pose sensor.
Further, the image generating module is further configured to generate the first image according to the orientation of the image sensor, and display the first image on a display device.
Further, the image generation apparatus further includes: the attribute change command module is used for receiving an attribute change command of the image sensor and changing the attribute of the image sensor according to the attribute change command; a second image generation module to generate a second image based on the changed properties of the image sensor, wherein the second image is composed of a partial image of a three-dimensional scene.
Further, the attribute of the image sensor is the type of the image sensor, the type comprising a front image sensor and a rear image sensor; receiving an attribute change command of the image sensor and changing the attribute accordingly includes: receiving a type change command of the image sensor, and switching the type of the image sensor according to that command; and generating the second image according to the changed attribute includes: generating the second image according to the first image corresponding to the sensor type before the change.
Further, the attribute of the image sensor is the orientation of the image sensor; the receiving an attribute change command of the image sensor, and changing the attribute of the image sensor according to the attribute change command includes: receiving an orientation change command of the image sensor, and changing the orientation of the image sensor according to the orientation change command; generating a second image according to the changed attribute of the image sensor, including: a second image is generated according to the orientation of the image sensor after the change.
Further, the image generation apparatus further includes: the preset target judgment module is used for judging whether a preset target appears in the image acquired by the image sensor; and the preset target extraction module is used for extracting the preset target from the image acquired by the image sensor if the preset target appears.
Further, the image generating module is further configured to generate the first image according to the attribute of the image sensor and the predetermined target.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
an electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions, such that the processor when executing performs the steps of any of the above methods.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
a computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the methods described above.
The embodiments of the disclosure provide an image generation method, an image generation apparatus, a hardware device, and a computer-readable storage medium. The image generation method comprises: loading a three-dimensional scene; positioning an image sensor at an origin of the three-dimensional scene; acquiring attributes of the image sensor; and generating a first image according to the attributes of the image sensor, wherein the first image is composed of a partial image of the three-dimensional scene. In the prior art, after the intelligent terminal generates the three-dimensional scene, the relative position of the image sensor in the scene can be moved through a control, but that manner of movement is not flexible enough, and the image the sensor captures from the three-dimensional scene cannot be switched flexibly. By acquiring the attributes of the image sensor directly and determining the captured image from those attributes, the disclosed scheme makes image switching more flexible.
The foregoing is a summary of the present disclosure, provided so that its technical means may be clearly understood; the disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
FIG. 1 is a schematic flow diagram of an image generation method according to one embodiment of the present disclosure;
FIG. 2 is a schematic flow diagram of an image generation method according to yet another embodiment of the present disclosure;
FIG. 3 is a diagram illustrating a method for determining a viewing range of an image sensor in the embodiment of FIG. 2;
FIG. 4 is a schematic flow diagram of an image generation method according to yet another embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an image generation apparatus according to one embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an image generation apparatus according to yet another embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an image generation apparatus according to yet another embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device according to one embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a computer-readable storage medium according to one embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an image generation terminal according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below through specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from this specification. It should be understood that the described embodiments are merely some, not all, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details of this description without departing from the spirit of the disclosure. It should be noted that the features in the following embodiments and examples may be combined with one another in the absence of conflict. All other embodiments that a person skilled in the art can derive from the embodiments disclosed herein without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
To solve the technical problem of how to improve the user's experience, the embodiments of the present disclosure provide an image generation method. The method can be executed by an image generation apparatus, which may be implemented as software or as a combination of software and hardware, and may be integrated into a device in an image processing system, such as an image processing server or an image processing terminal device. As shown in fig. 1, the image generation method mainly includes the following steps S1 to S4:
step S1: and loading the three-dimensional scene.
The three-dimensional scene can be loaded in two ways. The first is to load a three-dimensional scene template directly: the template is a pre-made three-dimensional scene, and obtaining and configuring it completes the loading. The second is to load the images the scene requires. For example, if the three-dimensional scene is a hexahedron, the images of its six faces may be acquired and their positions set (up, down, left, right, front, and rear); when the scene is loaded, it is generated directly from these six images. The second way involves stitching the pictures: feature points can be marked on the images, and during stitching the images are joined according to those feature points to generate the three-dimensional scene.
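By way of illustration only, the following is a minimal Python sketch of the second loading mode; the directory layout, file names, and function name are assumptions made for the example, not part of the disclosure, and the stitching along feature points described above is omitted:

```python
import numpy as np
from PIL import Image  # assumed dependency for reading the face images

# Face order follows the up/down/left/right/front/rear positions set above;
# the file names are hypothetical.
FACE_NAMES = ["up", "down", "left", "right", "front", "rear"]

def load_hexahedron_scene(image_dir):
    """Collect the six face images of a hexahedral three-dimensional scene.

    A full implementation would also stitch adjacent faces along the
    marked feature points; here the faces are simply loaded and keyed
    by their position.
    """
    scene = {}
    for name in FACE_NAMES:
        face = Image.open(f"{image_dir}/{name}.png").convert("RGB")
        scene[name] = np.asarray(face)
    return scene
```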
Step S2: an image sensor is positioned at an origin of the three-dimensional scene.
In this embodiment, the image generation method may be executed on a terminal that includes an image sensor such as a camera. After the three-dimensional scene is loaded, the image sensor is positioned at an origin of the scene, and a three-dimensional coordinate system is established in the scene from that origin. The terminal's position may always be treated as the origin as it moves; alternatively, the origin may be fixed, in which case the terminal moves within the three-dimensional scene and its coordinates are determined in real time.
Step S3: attributes of the image sensor are acquired.
The attributes of the image sensor in this embodiment may include one or more of its position, orientation, and type. The position and orientation can be obtained from the data of a pose sensor, which is arranged on the terminal where the image sensor is located or directly together with the image sensor; its purpose is to judge the image sensor's position and posture. The pose sensor's data comprise a position, describing where the image sensor is in the three-dimensional scene, and a pose, describing the direction the image sensor faces in the scene. The type of the image sensor is either front-facing or rear-facing, distinguishing where the sensor sits on the terminal: the front and rear image sensors are arranged on the front and rear faces of the terminal respectively and face opposite directions. The type can be read through an image sensor type flag bit; for example, a buffer may be set on the image sensor or the terminal to store the flag bit of the currently used sensor, with 1 indicating the front camera and 0 the rear camera.
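As a hedged illustration of how these attributes might be represented and read, the sketch below assumes a pose-sensor object exposing `position` and `orientation` fields; the names and the flag-bit convention follow the description above but are otherwise hypothetical:

```python
from dataclasses import dataclass

FRONT, REAR = 1, 0  # flag-bit convention described above: 1 = front camera

@dataclass
class SensorAttributes:
    position: tuple       # (x, y, z) in the three-dimensional scene
    orientation: tuple    # (yaw, pitch, roll) from the pose sensor, degrees
    sensor_type: int      # FRONT or REAR, read from the type flag bit

def read_attributes(pose_sensor, type_flag):
    """Assemble the image sensor's attributes from the pose sensor's data
    and the type flag bit; pose_sensor is a hypothetical hardware wrapper."""
    return SensorAttributes(position=pose_sensor.position,
                            orientation=pose_sensor.orientation,
                            sensor_type=FRONT if type_flag == 1 else REAR)
```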
Step S4: generating a first image according to the properties of the image sensor, wherein the first image is composed of a partial image of a three-dimensional scene.
In this step, one or more of the current position, orientation, and type of the image sensor are determined from its attributes, and the first image is generated by combining them. The first image is the partial image of the three-dimensional scene captured by the image sensor within that scene; for example, if the three-dimensional scene is a house, the first image is the part of the room the image sensor faces. The first image is displayed on a display device, which may be the display of the terminal where the image sensor is located or a separate display coupled to the image sensor.
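A minimal sketch of deriving the first image from the orientation attribute is given below. For simplicity it crops a horizontal slice of an equirectangular panorama rather than sampling a hexahedral scene's view frustum; the function name and the 90-degree default field of view are assumptions:

```python
import numpy as np

def generate_first_image(panorama, yaw_deg, fov_deg=90.0):
    """Return the part of the scene the sensor faces.

    panorama is an equirectangular image (H x W x 3); the columns spanning
    fov_deg around the yaw angle are selected, wrapping at the seam.
    """
    h, w, _ = panorama.shape
    center = int((yaw_deg % 360.0) / 360.0 * w)
    half = max(1, int(fov_deg / 360.0 * w / 2))
    cols = [(center + dx) % w for dx in range(-half, half)]
    return panorama[:, cols]
```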
The embodiments of the disclosure provide an image generation method, an image generation apparatus, a hardware device, and a computer-readable storage medium. The image generation method comprises: loading a three-dimensional scene; positioning an image sensor at an origin of the three-dimensional scene; acquiring attributes of the image sensor; and generating a first image according to the attributes of the image sensor, wherein the first image is composed of a partial image of the three-dimensional scene. In the prior art, after the intelligent terminal generates the three-dimensional scene, the relative position of the image sensor in the scene can be moved through a control, but that manner of movement is not flexible enough, and the image the sensor captures from the three-dimensional scene cannot be switched flexibly. By acquiring the attributes of the image sensor directly and determining the captured image from those attributes, the disclosed scheme makes image switching more flexible.
As shown in fig. 2, in an alternative embodiment, after the step S4 of generating the first image according to the attribute of the image sensor, the method may further include:
step S51, receiving an attribute change command of the image sensor, and changing an attribute of the image sensor according to the attribute change command;
step S52, generating a second image based on the changed properties of the image sensor, wherein the second image is composed of partial images of the three-dimensional scene.
The attributes of the image sensor may include location, orientation, or type, as described in step S3.
When the attribute of the image sensor is its position and/or orientation, a position and/or orientation change command is received, and the position and/or orientation of the image sensor is changed according to that command; the second image is then generated based on the changed position and/or orientation of the image sensor.
The position and/or orientation of the image sensor are judged from the data of the pose sensor, which in this embodiment is a fusion of several sensors. The attitude data may come from a gyroscope, and from the orientation it is judged which part of the three-dimensional scene falls within the image sensor's acquisition range. The position data may come from a position sensor, from which the relative position of the image sensor in the three-dimensional scene is determined.
In one embodiment, the origin always coincides with the position of the image sensor: however the sensor moves, the three-dimensional scene moves with it, and the position data in the pose sensor are global position data. In this case the partial scene image does not change in size as the image sensor moves, only in image position, and the sensor's position data need not be used, since the sensor's position relative to the three-dimensional scene does not change.
In another embodiment, the origin does not move with the image sensor: when the sensor moves, the origin stays at the position where it was generated, the sensor moves within a relative coordinate system of the three-dimensional scene, and the position data in the pose sensor are relative positions in that coordinate system. In this case the range of the partial scene image changes with the distance between the image sensor and the plane on which the partial scene image lies, and can be calculated from that distance. As shown in fig. 3, suppose the image sensor 31 at a first position C has a viewing range AB on the plane 32, with O the foot of the perpendicular from the sensor's center to the plane and CO the distance from C to the plane; when the sensor moves to a second position C' it has a viewing range A'B' on the plane 32, with the same perpendicular foot O and distance C'O. Then, by similar triangles:
A'B' / AB = C'O / CO
From this, the size of the image sensor's viewing range at the new position can be calculated, and hence the range to be displayed on the panoramic image generation apparatus. In this embodiment, the second image may also be generated by combining the position with the orientation of the image sensor. It will be appreciated that the above is described in cross-section only; in practice the field of view of the image sensor is a two-dimensional region on the plane, for which the lengths AB and A'B' stand in.
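A numerical sketch of this similar-triangles relation (the function and variable names are illustrative only):

```python
def viewing_range_after_move(ab, co, c_prime_o):
    """With a fixed field-of-view angle, the viewing range on the plane
    scales linearly with the distance to the plane: A'B' = AB * C'O / CO."""
    return ab * c_prime_o / co

# e.g. moving back from 2 m to 3 m widens a 1.5 m viewing range to 2.25 m
assert viewing_range_after_move(1.5, 2.0, 3.0) == 2.25
```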
When the attribute of the image sensor is its type, it will be appreciated that upon receiving the attribute change command, the second image may be generated directly from the first image corresponding to the sensor type before the change. For example, if the currently used image sensor is the front-facing one and an image sensor type change command is received, the terminal switches to the rear-facing sensor; the image in the direction opposite the first image can then be used as the second image. This scheme generates the image quickly, without reloading the three-dimensional scene for the switched sensor.
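A minimal sketch of the switch, assuming a hypothetical `scene_view` callable that maps a yaw angle to the corresponding partial image of the already-loaded scene:

```python
def second_image_after_switch(scene_view, first_image_yaw_deg):
    """On a front/rear type change, sample the view 180 degrees opposite
    the first image's orientation instead of reloading the scene."""
    return scene_view((first_image_yaw_deg + 180.0) % 360.0)
```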
As shown in fig. 4, in an alternative embodiment, after the step S3 of acquiring the attributes of the image sensor, the method further includes:
step S31, judging whether a preset target appears in the image collected by the image sensor;
in this embodiment, the image sensor captures an image in reality, the image in reality includes a plurality of objects in a real scene, such as a table, a chair, a person, and the like in a room scene, and a predetermined object, such as a person, may be set in advance, so that whether the person appears in the image can be identified according to the face feature point. The method for acquiring the feature points is described by taking a human face as an example, the human face contour mainly comprises 5 parts of eyebrows, eyes, a nose, a mouth and cheeks, and sometimes also comprises pupils and nostrils, generally, the complete description of the human face contour is realized, the number of the feature points is about 60, if only a basic structure is described, the detailed description of each part is not needed, or the description of the cheeks is not needed, the number of the feature points can be correspondingly reduced, and if the pupil, the nostril or the characteristics of five sense organs needing more detail are needed to be described, the number of the feature points can be increased. Extracting the human face feature points on the image, namely searching the corresponding position coordinates of each human face contour feature point in the human face image, namely positioning the feature points, wherein the process needs to be carried out based on the corresponding features of the feature points, and after the image features capable of clearly identifying the feature points are obtained, searching and comparing in the image according to the features, and accurately positioning the positions of the feature points on the image. Since feature points occupy only a very small area (usually, the size of only a few to tens of pixels) in an image, the area occupied by features corresponding to the feature points on the image is also very limited and local, and there are two types of feature extraction methods currently used: (1) extracting one-dimensional range image features vertical to the contour; (2) and extracting the two-dimensional range image features of the feature point square neighborhood. There are many ways to implement the above two methods, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on. The number, accuracy and speed of the feature points used by the various implementation methods are different, and the method is suitable for different application scenes.
In this embodiment, part of the predetermined target may be recognized through some of the feature points, and the whole target then recognized from that part; for example, face recognition determines that a person is present, after which the person's body and limbs are recognized from the other feature points of the body.
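The disclosure does not fix a particular detector; as a hedged stand-in, the sketch below uses OpenCV's stock Haar cascade for frontal faces, which is far coarser than the roughly 60-point contour description above:

```python
import cv2  # assumed dependency (opencv-python)

def predetermined_target_present(frame_bgr):
    """Return face bounding boxes; an empty result means the predetermined
    target (a person, proxied here by a detected face) is absent."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```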
Step S32, if the predetermined target appears, extracting the predetermined target from the image captured by the image sensor.
If a predetermined target such as a person is identified in step S31, the person is extracted from the real image, which can be accomplished with a matting operation. Matting is a technique that separates the foreground part of an image from the background: the user designates a few foreground and background areas in the image, and the technique separates out all foreground objects according to certain judgment rules. The image may be represented by the following formula:
C=αF+(1-α)B
where C denotes the image pixel, F the foreground pixel, B the background pixel, and α the transparency, with 0 ≤ α ≤ 1. Foreground matting means solving for the three unknowns F, B, and α given the gray value of the observed pixel C, which requires sampling. There are many sampling strategies; a typical one is full sampling, which samples within a smaller neighborhood near the known foreground and background and within a larger neighborhood far from them, so that the sample points are fuller. Most matting methods solve this formula and differ only in how they solve it. The present disclosure is not limited to a specific method of extracting the predetermined target: any method capable of extracting a specific object from an image may be incorporated into the technical solution of the present disclosure.
In the present disclosure, when the predetermined target is extracted by matting, the extraction can proceed in different ways. Typically, after the predetermined target is identified in step S31, the target and the image within a preset range around it are matted to extract the target directly; alternatively, all foreground and background images can be extracted, the predetermined target obtained from the foreground, and the other foreground images and the background made transparent, thereby extracting the target.
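As an illustration of the extraction step, the sketch below substitutes OpenCV's GrabCut segmentation for a full matting solver; it produces a hard 0/1 alpha rather than the soft alpha of the formula above, and the rectangle is assumed to come from the detection step:

```python
import cv2
import numpy as np

def extract_target(frame_bgr, rect):
    """Extract a rough foreground alpha for the target inside rect,
    where rect is an (x, y, w, h) box around the detected target."""
    mask = np.zeros(frame_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)  # GrabCut's background model
    fgd = np.zeros((1, 65), np.float64)  # GrabCut's foreground model
    cv2.grabCut(frame_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Pixels labeled (probable) foreground get alpha 1, the rest alpha 0.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1.0, 0.0)
```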
In this embodiment, the step S4 of generating the first image according to the attribute of the image sensor includes:
step S41, generating the first image according to the attributes of the image sensor and the predetermined target.
In this step, the partial image of the three-dimensional scene captured by the image sensor is determined from the sensor's attributes, and the first image is generated from that partial image and the predetermined target. Specifically, the partial image of the three-dimensional scene serves as the background component of the first image and the predetermined target as its foreground component, combined according to the formula:
C=αF+(1-α)B
when the alpha value is set, the first image after synthesis can be calculated, the alpha value can be preset, or a setting interface can be provided, so that a user can define the alpha value and preview the synthesis effect of different alpha values in real time through a display device.
In this embodiment, a rendering sequence may be generated from the three-dimensional scene and the predetermined target: the three-dimensional scene is rendered first and the predetermined target afterwards, and the scene is set to be transparent (α = 1) where it overlaps the target.
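A direct per-pixel implementation of the compositing formula C=αF+(1-α)B above, assuming float images of identical size (the function name is illustrative):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground over background: C = alpha*F + (1 - alpha)*B.
    A 2-D alpha map is broadcast over the color channels."""
    a = alpha[..., None] if alpha.ndim == 2 else alpha
    return a * foreground + (1.0 - a) * background
```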
Although the steps in the method embodiments above are described in the given sequence, it should be clear to those skilled in the art that the steps of the embodiments of the present disclosure need not be performed in that order; they may also be performed in reverse, in parallel, or interleaved. Moreover, on the basis of the above steps, those skilled in the art may add further steps; these obvious modifications and equivalents also fall within the protection scope of the present disclosure and are not described again here.
For convenience of description, only the relevant parts of the embodiments of the present disclosure are shown, and details of the specific techniques are not disclosed, please refer to the embodiments of the method of the present disclosure.
In order to solve the technical problem that an image sensor cannot be flexibly switched to acquire an image from a three-dimensional scene in the prior art, the embodiment of the disclosure provides an image generation device. The apparatus may perform the steps described in the above-described image generation method embodiments. As shown in fig. 5, the apparatus mainly includes: a loading module 51, a positioning module 52, an attribute acquisition module 53 and an image generation module 54. The loading module 51 is configured to load a three-dimensional scene; a positioning module 52 for positioning an image sensor at an origin of the three-dimensional scene; an attribute acquisition module 53 configured to acquire an attribute of the image sensor; an image generation module 54 configured to generate a first image according to the attribute of the image sensor, wherein the first image is composed of a partial image of a three-dimensional scene.
The loading module 51 is further configured to obtain a background image of the three-dimensional scene, and generate the three-dimensional scene according to the background image.
Further, the three-dimensional scene is a hexahedron, and the background image includes images of the six faces of the hexahedron.
And the attribute acquiring module 53 is further configured to, when the attribute of the image sensor is the orientation of the image sensor, read data of the pose sensor, and acquire the orientation of the image sensor according to the data of the pose sensor.
The image generating module 54 is further configured to generate the first image according to the orientation of the image sensor, and display the first image on a display device.
The image generation apparatus corresponds to the image generation method in the embodiment shown in fig. 1, and specific details may refer to the description of the image generation method, which is not described herein again.
As shown in fig. 6, an image generating apparatus according to another embodiment of the present disclosure is provided. The image generating apparatus shown in fig. 5 is further provided with: an attribute change command module 61 and a second image generation module 62. The attribute change command module 61 is configured to receive an attribute change command of the image sensor, and change an attribute of the image sensor according to the attribute change command; a second image generation module 62 configured to generate a second image according to the changed property of the image sensor, wherein the second image is composed of a partial image of the three-dimensional scene.
The image generation apparatus corresponds to the image generation method in the embodiment shown in fig. 2, and specific details may refer to the description of the image generation method, which is not described herein again.
Fig. 7 shows another embodiment of the image generating apparatus, which is based on the image generating apparatus shown in fig. 5, and further includes: a predetermined target determination module 71, configured to determine whether a predetermined target appears in an image acquired by the image sensor; a predetermined target extracting module 72, configured to extract the predetermined target from the image captured by the image sensor if the predetermined target is present.
In this embodiment, the image generation module 54 is configured to generate the first image according to the attribute of the image sensor and the predetermined target.
The image generation apparatus corresponds to the image generation method in the embodiment shown in fig. 4, and specific details may refer to the description of the image generation method, which is not described herein again.
For detailed descriptions of the working principle, the realized technical effect, and the like of the embodiment of the image generation apparatus, reference may be made to the description related to the embodiment of the image generation method, and details are not repeated here.
Fig. 8 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in fig. 8, an electronic device 80 according to an embodiment of the present disclosure includes a memory 81 and a processor 82.
The memory 81 is used to store non-transitory computer readable instructions. In particular, memory 81 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 82 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 80 to perform desired functions. In one embodiment of the present disclosure, the processor 82 is configured to execute the computer readable instructions stored in the memory 81, so that the electronic device 80 performs all or part of the steps of the image generation method of the embodiments of the present disclosure.
It should be understood by those skilled in the art that, in solving the technical problem of obtaining a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures are likewise within the protection scope of the present disclosure.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 9 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 9, a computer-readable storage medium 90 according to an embodiment of the disclosure has non-transitory computer-readable instructions 91 stored thereon. The non-transitory computer readable instructions 91, when executed by a processor, perform all or a portion of the steps of the image generation methods of the embodiments of the present disclosure previously described.
The computer-readable storage medium 90 includes, but is not limited to: optical storage media (e.g., CD-ROMs and DVDs), magneto-optical storage media (e.g., MOs), magnetic storage media (e.g., magnetic tapes or removable disks), media with built-in rewritable non-volatile memory (e.g., memory cards), and media with built-in ROMs (e.g., ROM cartridges).
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 10 is a diagram illustrating a hardware configuration of an image generation terminal according to an embodiment of the present disclosure. As shown in fig. 10, the image generation terminal 100 includes the above-described image generation apparatus embodiment.
The terminal device may be implemented in various forms, and the terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation apparatus, a vehicle-mounted terminal device, a vehicle-mounted image generation terminal, a vehicle-mounted electronic rear view mirror, and the like, and fixed terminal devices such as a digital TV, a desktop computer, and the like.
The terminal may also include other components in equivalent alternative embodiments. As shown in fig. 10, the image generation terminal 100 may include a power supply unit 101, a wireless communication unit 102, an A/V (audio/video) input unit 103, a user input unit 104, a sensing unit 105, an interface unit 106, a controller 107, an output unit 108, a storage unit 109, and the like. Fig. 10 shows a terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
The wireless communication unit 102 allows radio communication between the terminal 100 and a wireless communication system or network. The A/V input unit 103 is used to receive audio or video signals. The user input unit 104 may generate key input data, according to commands input by the user, to control various operations of the terminal device. The sensing unit 105 detects the current state of the terminal 100, the position of the terminal 100, the presence or absence of the user's touch input to the terminal 100, the orientation of the terminal 100, the acceleration or deceleration and direction of the terminal 100's movement, and the like, and generates commands or signals for controlling the operation of the terminal 100. The interface unit 106 serves as an interface through which at least one external device can connect to the terminal 100. The output unit 108 is configured to provide output signals in a visual, audio, and/or tactile manner. The storage unit 109 may store software programs for processing and control operations performed by the controller 107, and may temporarily store data that has been output or is to be output; it may include at least one type of storage medium, and the terminal 100 may also cooperate with a network storage device that performs the storage function of the storage unit 109 over a network connection. The controller 107 generally controls the overall operation of the terminal device; in addition, it may include a multimedia module for reproducing or playing back multimedia data, and it may perform pattern-recognition processing to recognize handwriting input or picture-drawing input on the touch screen as characters or images. The power supply unit 101 receives external or internal power and, under the control of the controller 107, supplies the appropriate power required to operate the respective elements and components.
Various embodiments of the image generation method presented in this disclosure may be implemented using computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases the embodiments may be implemented in the controller 107. For a software implementation, the embodiments may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented as a software application (or program) written in any suitable programming language, stored in the storage unit 109, and executed by the controller 107.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples, and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including", "comprising", and "having" are open-ended, meaning "including, but not limited to", and may be used interchangeably with it. The word "or" as used herein means, and is used interchangeably with, "and/or", unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, "such as but not limited to".
Also, as used herein, "or" in a list of items beginning with "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (11)

1. An image generation method, comprising:
loading a three-dimensional scene;
positioning an image sensor at an origin of the three-dimensional scene; wherein the origin does not move with the position of the image sensor;
acquiring the attribute of the image sensor;
generating a first image according to the attributes of the image sensor, wherein the first image is composed of a partial image of a three-dimensional scene; wherein the range size of the partial image is determined according to the distance between the image sensor and the partial image;
receiving a type change command of the image sensor, and switching the type of the image sensor according to the type change command of the image sensor; wherein the types of image sensors include a front-facing image sensor and a rear-facing image sensor;
taking, as a second image, the image in the direction opposite to the first image corresponding to the sensor type before the change; wherein the second image is composed of a partial image of the three-dimensional scene.
2. The image generation method of claim 1, wherein the loading the three-dimensional scene comprises:
and acquiring a background image of the three-dimensional scene, and generating the three-dimensional scene according to the background image.
3. The image generation method according to claim 2, characterized in that:
the three-dimensional scene is a hexahedron, and the background image comprises images of six faces of the hexahedron.
4. The image generation method of claim 1, wherein the obtaining attributes of the image sensor comprises:
and when the attribute of the image sensor is the orientation of the image sensor, reading the data of the pose sensor, and acquiring the orientation of the image sensor according to the data of the pose sensor.
5. The image generation method of claim 4, wherein generating the first image according to the attributes of the image sensor comprises:
the first image is generated according to the orientation of the image sensor and displayed on a display device.
6. The image generation method of claim 1, further comprising, after the generating a first image according to the attributes of the image sensor:
receiving an orientation change command of the image sensor, and changing the orientation of the image sensor according to the orientation change command;
a second image is generated according to the orientation of the image sensor after the change.
7. The image generation method of claim 1, further comprising, after acquiring the attributes of the image sensor:
judging whether a preset target appears in an image acquired by the image sensor;
and if the predetermined target appears, extracting the predetermined target from the first image.
8. The image generation method of claim 7, wherein generating the first image according to the attributes of the image sensor comprises:
generating the first image according to the attribute of the image sensor and the predetermined target.
9. An image generation apparatus, comprising:
the loading module is used for loading the three-dimensional scene;
a positioning module for positioning an image sensor at an origin of the three-dimensional scene; wherein the origin does not move with the position of the image sensor;
the attribute acquisition module is used for acquiring the attribute of the image sensor;
an image generation module for generating a first image according to attributes of the image sensor, wherein the first image is composed of a partial image of a three-dimensional scene, and the range size of the partial image is determined according to the distance between the image sensor and the partial image; receiving a type change command of the image sensor, and switching the type of the image sensor according to the type change command, wherein the types of image sensor include a front-facing image sensor and a rear-facing image sensor; and taking, as a second image, the image in the direction opposite to the first image corresponding to the sensor type before the change, wherein the second image is composed of a partial image of the three-dimensional scene.
10. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image generation method of any of claims 1-8.
11. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image generation method of any one of claims 1 to 8.
CN201810877067.1A 2018-08-03 2018-08-03 Image generation method and device Active CN108965718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810877067.1A CN108965718B (en) 2018-08-03 2018-08-03 Image generation method and device


Publications (2)

Publication Number Publication Date
CN108965718A CN108965718A (en) 2018-12-07
CN108965718B (en) 2021-03-23

Family

ID=64467085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810877067.1A Active CN108965718B (en) 2018-08-03 2018-08-03 Image generation method and device

Country Status (1)

Country Link
CN (1) CN108965718B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102157011A (en) * 2010-12-10 2011-08-17 北京大学 Method for carrying out dynamic texture acquisition and virtuality-reality fusion by using mobile shooting equipment
CN102436621A (en) * 2011-11-08 2012-05-02 莫健新 Systems and methods for displaying housing estate landscape data and generating housing estate landscape display data
CN103703758A (en) * 2011-07-01 2014-04-02 英特尔公司 Mobile augmented reality system
CN106294918A (en) * 2015-06-10 2017-01-04 中国科学院宁波材料技术与工程研究所 A kind of method for designing of virtual transparence office system
CN106296783A (en) * 2016-07-28 2017-01-04 众趣(北京)科技有限公司 A kind of combination space overall situation 3D view and the space representation method of panoramic pictures
CN106993126A (en) * 2016-05-11 2017-07-28 深圳市圆周率软件科技有限责任公司 A kind of method and device that lens image is expanded into panoramic picture
CN107633547A (en) * 2017-09-21 2018-01-26 北京奇虎科技有限公司 Realize the view data real-time processing method and device, computing device of scene rendering
CN108305327A (en) * 2017-11-22 2018-07-20 北京居然设计家家居连锁集团有限公司 A kind of image rendering method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8207964B1 (en) * 2008-02-22 2012-06-26 Meadow William D Methods and apparatus for generating three-dimensional image data models


Also Published As

Publication number Publication date
CN108965718A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
US11354825B2 (en) Method, apparatus for generating special effect based on face, and electronic device
CN111880657B (en) Control method and device of virtual object, electronic equipment and storage medium
CN108830892B (en) Face image processing method and device, electronic equipment and computer readable storage medium
CN110072046B (en) Image synthesis method and device
CN108986016B (en) Image beautifying method and device and electronic equipment
CN110827376A (en) Augmented reality multi-plane model animation interaction method, device, equipment and storage medium
CN110956691B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN111638784B (en) Facial expression interaction method, interaction device and computer storage medium
US20210027046A1 (en) Method and apparatus for multi-face tracking of a face effect, and electronic device
CN104508680B (en) Improved video signal is tracked
CN112543343B (en) Live broadcast picture processing method and device based on live broadcast with wheat
CN110928411B (en) AR-based interaction method and device, storage medium and electronic equipment
CN114363689B (en) Live broadcast control method and device, storage medium and electronic equipment
CN112308977B (en) Video processing method, video processing device, and storage medium
KR20150011742A (en) User terminal device and the control method thereof
CN108509621A (en) Sight spot recognition methods, device, server and the storage medium of scenic spot panorama sketch
CN111627115A (en) Interactive group photo method and device, interactive device and computer storage medium
US11138743B2 (en) Method and apparatus for a synchronous motion of a human body model
CN108961314B (en) Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium
CN110858409A (en) Animation generation method and device
CN112333498A (en) Display control method and device, computer equipment and storage medium
CN108965718B (en) Image generation method and device
CN108989681A (en) Panorama image generation method and device
US20080122867A1 (en) Method for displaying expressional image
AU2015258346A1 (en) Method and system of transitioning between images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant