CN108510538B - Three-dimensional image synthesis method and device and computer-readable storage medium - Google Patents

Three-dimensional image synthesis method and device and computer-readable storage medium

Info

Publication number
CN108510538B
Authority
CN
China
Prior art keywords
images
group
cameras
image
depth image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810277922.5A
Other languages
Chinese (zh)
Other versions
CN108510538A (en)
Inventor
周仁义 (Zhou Renyi)
杨锐 (Yang Rui)
崔磊 (Cui Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810277922.5A priority Critical patent/CN108510538B/en
Publication of CN108510538A publication Critical patent/CN108510538A/en
Application granted granted Critical
Publication of CN108510538B publication Critical patent/CN108510538B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides a three-dimensional image synthesis method and device and a computer-readable storage medium. The device includes: an image acquisition module, configured to respectively acquire images shot by the cameras, where the cameras comprise two groups, each group includes at least one infrared camera and at least one color camera, the distance between the cameras within a group is smaller than a set threshold, and the distance between the groups is larger than the set threshold; a position correction module, configured to correct the position parameters of the cameras using the distances between the cameras within a group and the distances between the groups; and an image synthesis module, configured to synthesize the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images shot by the cameras. After the position parameters of the cameras are corrected using the distance between the cameras within each group and the distance between the groups, the depth image and the RGB values of its pixel points are obtained through synthesis, so that a better three-dimensional display effect can be obtained.

Description

Three-dimensional image synthesis method and device and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and an apparatus for synthesizing three-dimensional images, and a computer-readable storage medium.
Background
Face recognition is currently the most widely applied visual technique in pattern recognition. Visually, human faces share similar characteristics: the differences between individuals are small, all faces have a similar structure, and even the shapes of the facial organs are similar. Such characteristics are disadvantageous for distinguishing individuals by their faces. In addition, the shape of the face is unstable: a person can produce many expressions through facial changes, and the visual appearance of a face differs greatly at different observation angles. Face recognition is also affected by many other factors, such as lighting conditions (e.g., day and night, indoor and outdoor), coverings on the face (e.g., masks, sunglasses, hair and beards), age, and the shooting angle. Two-dimensional face recognition performs recognition on a two-dimensional face image; three-dimensional face recognition first constructs a three-dimensional face model on the basis of two-dimensional face images and then performs recognition.
Due to the complexity of practical application environments, the recognition performance of most face recognition systems degrades, especially under non-ideal lighting conditions.
Disclosure of Invention
Embodiments of the present invention provide a three-dimensional image synthesis method, an apparatus, and a computer-readable storage medium to solve at least one of the above technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a three-dimensional image synthesis apparatus, including:
an image acquisition module, configured to respectively acquire images shot by the cameras, where the cameras comprise two groups, each group includes at least one infrared camera and at least one color camera, the distance between the cameras within a group is smaller than a set threshold, and the distance between the groups is larger than the set threshold;
a position correction module, configured to correct the position parameters of the cameras using the distances between the cameras within a group and the distances between the groups; and
an image synthesis module, configured to synthesize the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images shot by the cameras.
With reference to the first aspect, in a first implementation manner of the first aspect, the image synthesis module includes:
a grouping submodule, configured to divide the four captured images into multiple groups, each group including three images;
a synthesis submodule, configured to perform synthesis with each group of images using the corrected position parameters, to obtain candidate depth images and the RGB values of the pixel points of each candidate depth image; and
a selection submodule, configured to select one depth image from the candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the grouping submodule is further configured to divide the four captured images into two groups, each group including two infrared images and one color image;
the synthesis submodule is further configured to perform synthesis with each group of images according to the corrected position parameters, to obtain two candidate depth images and the RGB values of their pixel points; and
the selection submodule is further configured to select one of the two candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
With reference to the first implementation manner of the first aspect, in a third implementation manner of the first aspect, the grouping submodule is further configured to divide the four captured images into two groups, each group including one infrared image and two color images;
the synthesis submodule is further configured to perform synthesis with each group of images according to the corrected position parameters, to obtain two candidate depth images and the RGB values of their pixel points; and
the selection submodule is further configured to select one of the two candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
In a second aspect, an embodiment of the present invention provides a three-dimensional image synthesis method, including:
respectively acquiring images shot by the cameras, where the cameras comprise two groups, each group includes at least one infrared camera and at least one color camera, the distance between the cameras within a group is smaller than a set threshold, and the distance between the groups is larger than the set threshold;
correcting the position parameters of the cameras using the distance between the cameras within a group and the distance between the groups; and
obtaining the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images shot by the cameras.
With reference to the second aspect, in a first implementation manner of the second aspect, obtaining the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images captured by the cameras includes:
dividing the four captured images into multiple groups, each group including three images;
performing synthesis with each group of images using the corrected position parameters, to obtain candidate depth images and the RGB values of the pixel points of each candidate depth image; and
selecting one depth image from the candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
With reference to the first implementation manner of the second aspect, in a second implementation manner of the second aspect,
dividing the four captured images into multiple groups, each group including three images, includes: dividing the four captured images into two groups, each group including two infrared images and one color image;
performing synthesis with each group of images using the corrected position parameters to obtain candidate depth images and the RGB values of their pixel points includes: performing synthesis with each group of images using the corrected position parameters to obtain two candidate depth images and the RGB values of the pixel points of the two candidate depth images;
selecting one depth image from the candidate depth images and taking the selected depth image and the RGB values of its pixel points as the synthesis result includes: selecting one depth image from the two candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
With reference to the first implementation manner of the second aspect, in a third implementation manner of the second aspect,
dividing the four captured images into multiple groups, each group including three images, includes: dividing the four captured images into two groups, each group including one infrared image and two color images;
performing synthesis with each group of images using the corrected position parameters to obtain candidate depth images and the RGB values of their pixel points includes: performing synthesis with each group of images using the corrected position parameters to obtain two candidate depth images and the RGB values of the pixel points of the two candidate depth images;
selecting one depth image from the candidate depth images and taking the selected depth image and the RGB values of its pixel points as the synthesis result includes: selecting one depth image from the two candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
In a third aspect, an embodiment of the present invention provides a three-dimensional image synthesis apparatus, including:
one or more processors;
storage means for storing one or more programs;
the camera is used for collecting images;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the methods described above.
The functions of the device can be realized by hardware, and can also be realized by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In one possible design, the three-dimensional image synthesizing apparatus includes a processor and a memory, the memory is used for storing a program for supporting the three-dimensional image synthesizing apparatus to execute the three-dimensional image synthesizing method, and the processor is configured to execute the program stored in the memory. The three-dimensional image synthesis device may further include a communication interface for communicating with other devices or a communication network.
In a fourth aspect, the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method described above.
One of the above technical solutions has the following advantages or beneficial effects: after the position parameters of the cameras are corrected by using the distance between the cameras in each group and the distance between each group of cameras, the depth image and the RGB value of each pixel point of the depth image are obtained through synthesis, and a better three-dimensional display effect can be obtained.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is a block diagram showing a configuration of a three-dimensional image synthesizing apparatus according to an embodiment of the present invention.
Fig. 2 is a schematic view showing a camera position of a three-dimensional image synthesizing apparatus according to an embodiment of the present invention.
Fig. 3 is a block diagram showing a configuration of a three-dimensional image synthesizing apparatus according to an embodiment of the present invention.
Fig. 4 shows a flowchart of a three-dimensional image synthesis method according to an embodiment of the present invention.
Fig. 5 illustrates a flowchart of a three-dimensional image synthesis method according to another embodiment of the present invention.
Fig. 6 shows a block diagram of a three-dimensional image synthesizing apparatus according to an embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Fig. 1 is a block diagram showing a configuration of a three-dimensional image synthesizing apparatus according to an embodiment of the present invention. As shown in fig. 1, the three-dimensional image synthesizing apparatus may include:
an image acquisition module 11, configured to respectively acquire images shot by the cameras, where the cameras comprise two groups, each group includes at least one infrared camera and at least one color camera, the distance between the cameras within a group is smaller than a set threshold, and the distance between the groups is larger than the set threshold;
a position correction module 13, configured to correct the position parameters of the cameras using the distances between the cameras within a group and the distances between the groups; and
an image synthesis module 15, configured to synthesize the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images captured by the cameras.
In one possible implementation, as shown in fig. 2, one infrared camera IR1 and one color camera RGB3 are provided in the left-eye region, one infrared camera IR2 and one color camera RGB4 are provided in the right-eye region, and the photographing centers of the four cameras lie on the same straight line. The distance between the two cameras of the left-eye region is less than a first threshold; the distance between the two cameras of the right-eye region is less than a second threshold; the distance between the positionally adjacent cameras of the left-eye region and the right-eye region is greater than a third threshold; the first threshold is less than the third threshold, and the second threshold is less than the third threshold.
The distance between the cameras within each group is smaller than a set threshold, so that the lenses of the cameras in a group are as close to each other as possible; this threshold can be determined from empirical values and adjusted according to the actual effect of image synthesis. The distance between the groups is larger than a set threshold, so that a baseline long enough to compute a depth image can be obtained. Various algorithms can be chosen to obtain the depth image, such as BM (Block Matching) and SGBM (Semi-Global Block Matching).
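As a concrete illustration, the following is a minimal sketch of computing a depth image from one rectified stereo pair with OpenCV's SGBM matcher. The file names, focal length and baseline values are illustrative assumptions, not values taken from the patent:

```python
# Minimal SGBM sketch (OpenCV). File names, focal length and baseline are
# illustrative assumptions; the pair is assumed to be rectified already.
import cv2
import numpy as np

left = cv2.imread("ir1.png", cv2.IMREAD_GRAYSCALE)   # e.g. image from IR1
right = cv2.imread("ir2.png", cv2.IMREAD_GRAYSCALE)  # e.g. image from IR2

block = 5
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be a multiple of 16
    blockSize=block,
    P1=8 * block * block,      # smoothness penalties suggested in the OpenCV docs
    P2=32 * block * block,
)
# compute() returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

f_px = 700.0      # assumed focal length in pixels
baseline = 0.06   # assumed baseline between the two groups, in meters
with np.errstate(divide="ignore"):
    depth = np.where(disparity > 0, f_px * baseline / disparity, 0.0)  # Z = f*B/d
```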
In the embodiment of the invention, each pixel point of the depth image generally has a gray value, which represents the distance between a point in the scene and the camera.
In an embodiment of the present invention, the position parameters of a camera may include intrinsic parameters and extrinsic parameters. Extrinsic parameters describe, for example, the relative positions between cameras; intrinsic parameters describe, for example, the lens position of a camera.
After the position parameters of the cameras are corrected, the corrected position parameters and the RGB values of the color images can be used to obtain the RGB value of each pixel point of the depth image.
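By way of illustration, here is a hedged sketch of that step: each valid depth pixel is back-projected to a 3D point, transformed into the color camera's coordinate frame with the corrected extrinsics, and projected into the color image to sample an RGB value. The intrinsic matrices K_depth and K_rgb and the extrinsics R, t are assumed calibration outputs; none of these names come from the patent:

```python
# Hedged sketch of assigning an RGB value to each depth pixel by reprojecting
# it into a color camera. K_depth, K_rgb, R and t are assumed calibration
# results (intrinsics and corrected extrinsics), not specified by the patent.
import numpy as np

def register_rgb(depth_m, color, K_depth, K_rgb, R, t):
    """Return an HxWx3 image giving the RGB value of each valid depth pixel."""
    h, w = depth_m.shape
    v, u = np.indices((h, w))
    z = depth_m.ravel()
    keep = z > 0                                   # skip invalid depth pixels
    u, v, z = u.ravel()[keep], v.ravel()[keep], z[keep]
    # Back-project depth pixels to 3D points in the depth camera frame.
    x = (u - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = R @ np.vstack([x, y, z]) + t.reshape(3, 1)   # into the color frame
    # Project into the color image and sample (nearest neighbour).
    uc = np.round(K_rgb[0, 0] * pts[0] / pts[2] + K_rgb[0, 2]).astype(int)
    vc = np.round(K_rgb[1, 1] * pts[1] / pts[2] + K_rgb[1, 2]).astype(int)
    out = np.zeros((h, w, 3), dtype=color.dtype)
    inside = (uc >= 0) & (uc < color.shape[1]) & (vc >= 0) & (vc < color.shape[0])
    out[v[inside], u[inside]] = color[vc[inside], uc[inside]]
    return out
```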
For example, as shown in fig. 2, the cameras are arranged from left to right as the infrared camera IR1, the color camera RGB3, the infrared camera IR2 and the color camera RGB4. The distance between the centers of IR1 and RGB3 is L1, the distance between the centers of RGB3 and IR2 is L2, and the distance between the centers of IR2 and RGB4 is L3. The four cameras are controlled to photograph the face of a target object, such as a person, simultaneously, producing two infrared face images S1 and S2 and two color face images S3 and S4. The position parameters of the cameras are corrected according to the distances L1, L2 and L3, and the depth image and the RGB values of its pixel points are then obtained from the corrected position parameters and the four images S1, S2, S3 and S4.
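The patent does not spell out the correction algorithm itself. One plausible reading, sketched below under that assumption, is to rescale each stereo-calibrated translation vector so that its length matches the physically measured center distance (L1, L2 or L3); the numeric values are purely illustrative:

```python
# Hedged sketch: rescale a calibrated translation so its norm matches the
# physically measured center distance. This is one plausible reading of
# "correcting the position parameters", not the patent's stated algorithm.
import numpy as np

def correct_translation(T_calibrated, measured_distance):
    """Rescale a calibrated extrinsic translation to the measured baseline."""
    T = np.asarray(T_calibrated, dtype=float)
    return T * (measured_distance / np.linalg.norm(T))

# e.g. correcting the IR1 -> RGB3 extrinsic translation with the measured L1
T_ir1_rgb3 = correct_translation([0.0195, 0.0004, -0.0008], measured_distance=0.020)
```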
According to the embodiment of the invention, the position parameters of the cameras are corrected using the distance between the cameras within a group and the distance between the groups, and the depth image and the RGB values of its pixel points are then synthesized, so that a better three-dimensional display effect can be obtained.
Fig. 3 is a block diagram showing a configuration of a three-dimensional image synthesizing apparatus according to an embodiment of the present invention. On the basis of the above embodiment, as shown in fig. 3, the image synthesis module 15 may include:
a grouping submodule 31, configured to divide the four captured images into multiple groups, each group including three images;
a synthesis submodule 33, configured to perform synthesis with each group of images using the corrected position parameters, to obtain candidate depth images and the RGB values of the pixel points of each candidate depth image; and
a selection submodule 35, configured to select one depth image from the candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
There are various ways in which the image synthesis module can select three images from the four captured images. For example, the selection may be random; the three images with the highest definition, the highest brightness or the most appropriate color components may be kept; the two color images may be kept together with the infrared image with the better effect; or the two infrared images may be kept together with the color image with the better effect. In practical applications, the strategy can be chosen flexibly according to requirements, or synthesis can be performed in several of these ways and the result with the best effect selected.
In the above example, S1, S2, S3 and S4 may be divided into the groups (S1, S2, S3), (S1, S2, S4), (S2, S3, S4) and (S1, S3, S4); one candidate depth image is computed for each group, and one candidate depth image is then selected as the synthesis result, for example the one with the highest definition or the highest brightness.
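A minimal sketch of this grouping-and-selection logic follows. The synthesize callable is a caller-supplied stand-in for the per-group synthesis step, and scoring candidates by variance of the Laplacian is one assumed reading of selecting the "highest definition" result:

```python
# Hedged sketch of grouping four shots into triples and keeping the sharpest
# candidate depth image. `synthesize` stands in for the per-group synthesis;
# Laplacian variance is an assumed sharpness ("definition") score.
import itertools
import cv2

def sharpness(img):
    return cv2.Laplacian(img, cv2.CV_64F).var()

def pick_best(images, synthesize):
    """images: {"S1": ..., "S2": ..., "S3": ..., "S4": ...};
    synthesize(list_of_three_images) -> (candidate_depth, candidate_rgb)."""
    best = None
    for triple in itertools.combinations(sorted(images), 3):
        depth, rgb = synthesize([images[k] for k in triple])
        score = sharpness(depth)
        if best is None or score > best[0]:
            best = (score, depth, rgb)
    return best[1], best[2]  # the selected depth image and its RGB values
```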
In a possible implementation manner, the grouping submodule 31 is further configured to divide the four captured images into two groups, each group including two infrared images and one color image;
the synthesis submodule 33 is further configured to perform synthesis with each group of images according to the corrected position parameters, to obtain two candidate depth images and the RGB values of their pixel points; and
the selection submodule 35 is further configured to select one depth image from the two candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
In the above example, S1, S2, S3 and S4 may be divided into (S1, S2, S3) and (S1, S2, S4); two candidate depth images are computed, one of them is selected, and the selected depth image and the RGB values of its pixel points are taken as the synthesis result.
In a possible implementation manner, the grouping submodule 31 is further configured to divide the four captured images into two groups, each group including one infrared image and two color images;
the synthesis submodule 33 is further configured to perform synthesis with each group of images according to the corrected position parameters, to obtain two candidate depth images and the RGB values of their pixel points; and
the selection submodule 35 is further configured to select one depth image from the two candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
In the above example, S1, S2, S3 and S4 may be divided into (S2, S3, S4) and (S1, S3, S4); two candidate depth images and the RGB values of their pixel points are computed, one of them is selected, and the selected depth image and the RGB values of its pixel points are taken as the synthesis result.
In the three-dimensional image synthesis device of the embodiment of the invention, the distance between the cameras within each group is smaller than the set threshold, so that the lenses of the cameras in a group are as close to each other as possible, which facilitates the subsequent correction of the camera position parameters; the distance between the two groups is larger than the set threshold, so that a suitable depth image can be obtained with this distance as the baseline. In general, the smaller the baseline, the closer the range of depth detection and the better the near-field effect of the depth image; conversely, the larger the baseline, the farther the depth detection reaches and the better the far-field effect.
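This trade-off follows from the stereo triangulation relation Z = f·B/d: the depth change corresponding to one pixel of disparity error grows roughly as Z²/(f·B), so a longer baseline resolves distant scenes better while a shorter baseline suffices near the camera. The numbers below are illustrative only:

```python
# Depth resolution per 1 px of disparity error: dZ ~ Z^2 / (f * B).
f_px = 700.0                      # assumed focal length in pixels
for B in (0.02, 0.06):            # e.g. in-group vs. between-group baseline (m)
    for Z in (0.5, 3.0):          # a near and a far scene depth (m)
        dZ = Z * Z / (f_px * B)
        print(f"B={B:.2f} m, Z={Z:.1f} m -> depth step ~ {dZ * 100:.1f} cm")
```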
Furthermore, because infrared cameras are used, the texture contributed by the near-infrared component is richer in the daytime, so a dense depth image can be obtained; at night, a depth image of near scenes can also be acquired. Reliable depth images can thus be obtained both in the daytime and at night, which helps guarantee detection accuracy and the experience of assisted interaction. The three-dimensional image synthesis device of the embodiment of the invention is therefore applicable both day and night.
Fig. 4 shows a flowchart of a three-dimensional image synthesis method according to an embodiment of the present invention. As shown in fig. 4, the three-dimensional image synthesis method may include the steps of:
101. respectively acquiring images shot by the cameras, where the cameras comprise two groups, each group includes at least one infrared camera and at least one color camera, the distance between the cameras within a group is smaller than a set threshold, and the distance between the groups is larger than the set threshold;
102. correcting the position parameters of the cameras using the distance between the cameras within a group and the distance between the groups; and
103. obtaining the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images shot by the cameras.
In one possible implementation, referring to fig. 2, one infrared camera and one color camera are arranged in the left-eye region, one infrared camera and one color camera are arranged in the right-eye region, and the shooting centers of the four cameras lie on the same straight line; the distance between the two cameras of the left-eye region is less than a first threshold; the distance between the two cameras of the right-eye region is less than a second threshold; the distance between the positionally adjacent cameras of the left-eye region and the right-eye region is greater than a third threshold; the first threshold is less than the third threshold, and the second threshold is less than the third threshold.
Fig. 5 illustrates a flowchart of a three-dimensional image synthesis method according to another embodiment of the present invention. As shown in fig. 5, in the three-dimensional image synthesis method, step 103 may include:
201. dividing the four captured images into multiple groups, each group including three images;
202. performing synthesis with each group of images using the corrected position parameters, to obtain candidate depth images and the RGB values of the pixel points of each candidate depth image;
203. selecting one depth image from the candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
In one possible implementation, step 201 includes: dividing the four captured images into two groups, each group including two infrared images and one color image;
step 202 includes: performing synthesis with each group of images using the corrected position parameters, to obtain two candidate depth images and the RGB values of the pixel points of the two candidate depth images;
step 203 includes: selecting one depth image from the two candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
In another possible implementation, step 201 includes: dividing the four captured images into two groups, each group including one infrared image and two color images;
step 202 includes: performing synthesis with each group of images using the corrected position parameters, to obtain two candidate depth images and the RGB values of the pixel points of the two candidate depth images;
step 203 includes: selecting one depth image from the two candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
For the principle of the three-dimensional image synthesis method in the embodiment of the present invention, reference may be made to the related description in the embodiment of the three-dimensional image synthesis apparatus, which is not repeated here.
Fig. 6 shows a block diagram of a three-dimensional image synthesizing apparatus according to an embodiment of the present invention. As shown in fig. 6, the three-dimensional image synthesizing apparatus includes a memory 910 and a processor 920, the memory 910 storing a computer program operable on the processor 920. When executing the computer program, the processor 920 implements the three-dimensional image synthesis method in the above embodiments. There may be one or more memories 910 and one or more processors 920.
The three-dimensional image synthesizing apparatus further includes:
the camera is used for collecting images;
and a communication interface 930 for communicating with an external device to perform data interactive transmission.
The memory 910 may include high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
If the memory 910, the processor 920 and the communication interface 930 are implemented independently, the memory 910, the processor 920 and the communication interface 930 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 6, but this is not intended to represent only one bus or type of bus.
Optionally, in an implementation, if the memory 910, the processor 920 and the communication interface 930 are integrated on a chip, the memory 910, the processor 920 and the communication interface 930 may complete communication with each other through an internal interface.
An embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and the computer program is used for implementing the method of any one of the above embodiments when being executed by a processor.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art can combine the various embodiments or examples and the features of the different embodiments or examples described in this specification without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present invention, and these should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. A three-dimensional image synthesizing apparatus characterized by comprising:
an image acquisition module, configured to respectively acquire images shot by the cameras, wherein the cameras comprise two groups, each group includes at least one infrared camera and at least one color camera, the distance between the cameras within a group is smaller than a set threshold, and the distance between the groups is larger than the set threshold; the distance between the two cameras of the first group is less than a first threshold; the distance between the two cameras of the second group is less than a second threshold; the distance between the positionally adjacent cameras of the first group and the second group is greater than a third threshold; the first threshold is less than the third threshold, and the second threshold is less than the third threshold;
a position correction module, configured to correct the position parameters of the cameras using the distances between the cameras within a group and the distances between the groups; and
an image synthesis module, configured to synthesize the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images shot by the cameras;
wherein the image synthesis module comprises:
a grouping submodule, configured to divide the four captured images into multiple groups, each group including three images;
a synthesis submodule, configured to perform synthesis with each group of images using the corrected position parameters, to obtain candidate depth images and the RGB values of the pixel points of each candidate depth image; and
a selection submodule, configured to select one depth image from the candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
2. The apparatus of claim 1,
the grouping submodule is further configured to divide the four captured images into two groups, each group including two infrared images and one color image;
the synthesis submodule is further configured to perform synthesis with each group of images according to the corrected position parameters, to obtain two candidate depth images and the RGB values of their pixel points; and
the selection submodule is further configured to select one of the two candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
3. The apparatus of claim 1,
the grouping submodule is further configured to divide the four captured images into two groups, each group including one infrared image and two color images;
the synthesis submodule is further configured to perform synthesis with each group of images according to the corrected position parameters, to obtain two candidate depth images and the RGB values of their pixel points; and
the selection submodule is further configured to select one of the two candidate depth images and take the selected depth image and the RGB values of its pixel points as the synthesis result.
4. A three-dimensional image synthesis method, comprising:
respectively acquiring images shot by the cameras, wherein the cameras comprise two groups, each group includes at least one infrared camera and at least one color camera, the distance between the cameras within a group is smaller than a set threshold, and the distance between the groups is larger than the set threshold; the distance between the two cameras of the first group is less than a first threshold; the distance between the two cameras of the second group is less than a second threshold; the distance between the positionally adjacent cameras of the first group and the second group is greater than a third threshold; the first threshold is less than the third threshold, and the second threshold is less than the third threshold;
correcting the position parameters of the cameras using the distance between the cameras within a group and the distance between the groups; and
obtaining the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images shot by the cameras;
wherein obtaining the depth image and the RGB values of the pixel points of the depth image according to the corrected position parameters and the images shot by the cameras includes:
dividing the four captured images into multiple groups, each group including three images;
performing synthesis with each group of images using the corrected position parameters, to obtain candidate depth images and the RGB values of the pixel points of each candidate depth image; and
selecting one depth image from the candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
5. The method of claim 4,
dividing the four captured images into multiple groups, each group including three images, includes: dividing the four captured images into two groups, each group including two infrared images and one color image;
performing synthesis with each group of images using the corrected position parameters to obtain candidate depth images and the RGB values of their pixel points includes: performing synthesis with each group of images using the corrected position parameters to obtain two candidate depth images and the RGB values of the pixel points of the two candidate depth images;
selecting one depth image from the candidate depth images and taking the selected depth image and the RGB values of its pixel points as the synthesis result includes: selecting one depth image from the two candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
6. The method of claim 4,
dividing the four captured images into multiple groups, each group including three images, includes: dividing the four captured images into two groups, each group including one infrared image and two color images;
performing synthesis with each group of images using the corrected position parameters to obtain candidate depth images and the RGB values of their pixel points includes: performing synthesis with each group of images using the corrected position parameters to obtain two candidate depth images and the RGB values of the pixel points of the two candidate depth images;
selecting one depth image from the candidate depth images and taking the selected depth image and the RGB values of its pixel points as the synthesis result includes: selecting one depth image from the two candidate depth images, and taking the selected depth image and the RGB values of its pixel points as the synthesis result.
7. A three-dimensional image synthesizing apparatus characterized by comprising:
one or more processors;
storage means for storing one or more programs;
the camera is used for collecting images;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 4-6.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 4 to 6.
CN201810277922.5A 2018-03-30 2018-03-30 Three-dimensional image synthesis method and device and computer-readable storage medium Active CN108510538B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810277922.5A CN108510538B (en) 2018-03-30 2018-03-30 Three-dimensional image synthesis method and device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810277922.5A CN108510538B (en) 2018-03-30 2018-03-30 Three-dimensional image synthesis method and device and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108510538A CN108510538A (en) 2018-09-07
CN108510538B (en) 2020-01-17

Family

ID=63379393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810277922.5A Active CN108510538B (en) 2018-03-30 2018-03-30 Three-dimensional image synthesis method and device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN108510538B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446910A * 2020-12-11 2021-03-05 Hangzhou Hikrobot Technology Co., Ltd. Depth image obtaining method and device, electronic equipment and storage medium
CN112465891A * 2020-12-11 2021-03-09 Hangzhou Hikrobot Technology Co., Ltd. Depth image obtaining method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104604220A * 2012-09-03 2015-05-06 LG Innotek Co., Ltd. Image processing system
CN106572339A * 2016-10-27 2017-04-19 Shenzhen Orbbec Co., Ltd. Image collector and image collecting system
CN106780589A * 2016-12-09 2017-05-31 Shenzhen Orbbec Co., Ltd. Method for obtaining a target depth image
CN206350072U * 2016-11-08 2017-07-21 Altek Semiconductor Corp. Photographing module and camera device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9743069B2 (en) * 2012-08-30 2017-08-22 Lg Innotek Co., Ltd. Camera module and apparatus for calibrating position thereof
JP2014092461A (en) * 2012-11-02 2014-05-19 Sony Corp Image processor and image processing method, image processing system, and program
TWI503618B (en) * 2012-12-27 2015-10-11 Ind Tech Res Inst Device for acquiring depth image, calibrating method and measuring method therefore
KR20150004989A (en) * 2013-07-03 2015-01-14 한국전자통신연구원 Apparatus for acquiring 3d image and image processing method using the same
EP2871843B1 * 2013-11-12 2019-05-29 LG Electronics Inc. Digital device and method for processing three dimensional image thereof
KR102214193B1 (en) * 2014-03-25 2021-02-09 삼성전자 주식회사 Depth camera device, 3d image display system having the same and control methods thereof
CN106846350B * 2016-11-23 2019-09-24 Hangzhou KrVision Technology Co., Ltd. Obstacle early-warning system and method for visually impaired people based on an RGB-D camera and stereo sound

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104604220A * 2012-09-03 2015-05-06 LG Innotek Co., Ltd. Image processing system
CN106572339A * 2016-10-27 2017-04-19 Shenzhen Orbbec Co., Ltd. Image collector and image collecting system
CN206350072U * 2016-11-08 2017-07-21 Altek Semiconductor Corp. Photographing module and camera device
CN106780589A * 2016-12-09 2017-05-31 Shenzhen Orbbec Co., Ltd. Method for obtaining a target depth image

Also Published As

Publication number Publication date
CN108510538A (en) 2018-09-07

Similar Documents

Publication Publication Date Title
CN107977940B (en) Background blurring processing method, device and equipment
US10085011B2 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
US20170127045A1 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
US10872436B2 (en) Spatial positioning method, spatial positioning device, spatial positioning system and computer readable storage medium
CN110493527B (en) Body focusing method and device, electronic equipment and storage medium
JP7123736B2 (en) Image processing device, image processing method, and program
US8922627B2 (en) Image processing device, image processing method and imaging device
US10186051B2 (en) Method and system for calibrating a velocimetry system
CN108460368B (en) Three-dimensional image synthesis method and device and computer-readable storage medium
CN111899282A (en) Pedestrian trajectory tracking method and device based on binocular camera calibration
US9406140B2 (en) Method and apparatus for generating depth information
CN109785390B (en) Method and device for image correction
CN111091507A (en) Image processing method, image processing apparatus, electronic device, and storage medium
US9786077B2 (en) Unified image processing for combined images based on spatially co-located zones
KR20170045314A (en) Image processing method and apparatus and electronic device
CN108510538B (en) Three-dimensional image synthesis method and device and computer-readable storage medium
CN110458952A Three-dimensional reconstruction method and device based on trinocular vision
CN110490196A (en) Subject detection method and apparatus, electronic equipment, computer readable storage medium
CN109785225B (en) Method and device for correcting image
CN111080542B (en) Image processing method, device, electronic equipment and storage medium
KR101597915B1 (en) Image processing apparatus and image processing method
US20200202495A1 (en) Apparatus and method for dynamically adjusting depth resolution
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN110800020A (en) Image information acquisition method, image processing equipment and computer storage medium
CN110197228B (en) Image correction method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant