CN113436325B - Image processing method and device, electronic equipment and storage medium

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN113436325B
CN113436325B (application CN202110875505.2A)
Authority
CN
China
Prior art keywords
point
distance
color
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110875505.2A
Other languages
Chinese (zh)
Other versions
CN113436325A (en)
Inventor
施侃乐
朱恬倩
李雅子
郑文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110875505.2A priority Critical patent/CN113436325B/en
Publication of CN113436325A publication Critical patent/CN113436325A/en
Application granted granted Critical
Publication of CN113436325B publication Critical patent/CN113436325B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G06T2207/10024 Color image
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium, in the technical field of image processing. The method comprises the following steps: acquiring light field data and three-dimensional data of an image to be processed; determining light field geometric data from the light field data; determining, from the light field data, the light field geometric data, and the three-dimensional data, the point in three-dimensional space closest to the screen for each pixel of the image to be processed; and rendering the image to be processed according to the color and position of that closest point to obtain a target image. With this scheme, a target image incorporating the light field data, the geometric data of the first object, and the three-dimensional data of the second object can be obtained, the occlusion relationships among objects can be rendered accurately, and rendering realism is improved.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of image processing, and in particular to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
Currently, the display effect of an image to be processed can be determined based on the distance relationships among the objects included in a three-dimensional model. Specifically, if a first object is closer than a second object in the image, the first object is displayed in the image.
During actual image processing (or display), the image may be influenced by a light field; that is, an object included in the light field data may also occlude an object included in the original three-dimensional model. Since the prior art does not consider occlusion by objects in the light field data, it may fail to determine the final display effect of the image accurately.
Disclosure of Invention
The disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, which solve the technical problem that existing image processing techniques cannot accurately determine the occlusion relationships among objects and therefore cannot accurately determine the display effect of an image.
The technical scheme of the embodiment of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method. The method may include: acquiring light field data of an image to be processed and geometric information of a first object of the image to be processed, wherein the light field data represents light ray information of the area where a second object of the image to be processed is located, and an occlusion relationship exists between the first object and the second object; adding geometric information corresponding to the light field data to the second object to obtain the geometric information of the second object; drawing the first object and the second object according to the geometric information of the first object and the geometric information of the second object, so as to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen; determining a target value for each point in the first object and each point in the second object, wherein the target value is the minimum of the distance between a first point and the screen and the distance between a second point and the screen, the first point being any point in the first object and the second point being the point in the second object corresponding to the first point; and when the target value is smaller than a target distance, updating a target color to the color of the point corresponding to the target value to obtain a target image corresponding to the image to be processed, wherein the target distance is the minimum distance between the point corresponding to the position of the first point in a depth buffer and the screen, the target color is the color of the point corresponding to the position of the first point in a color buffer, the depth buffer stores the current minimum distance between each point of the image to be processed and the screen, and the color buffer stores the current color of each point of the image to be processed.
Optionally, the adding geometric information corresponding to the light field data to the second object to obtain the geometric information of the second object specifically includes: processing the light field data according to a preset algorithm to determine initial geometric information of the second object; and adjusting the initial geometric information to obtain the geometric information of the second object, wherein the region corresponding to the geometric information of the second object overlaps with the region corresponding to the light field data.
Optionally, the determining the initial geometric information of the second object specifically includes: determining point cloud data for representing the initial geometric information based on the light information of the area where the second object is located and a multi-angle reconstruction point cloud technology; or determining the geometric shape of the second object in the three-dimensional space, determining parameters for characterizing the geometric shape, and taking the geometric shape characterized by the parameters as the initial geometric information; or, in response to the configuration operation, acquiring a triangular mesh model corresponding to the second object, and determining geometric information included in the triangular mesh model as the initial geometric information.
Optionally, the adjusting the initial geometric information to obtain the geometric information of the second object specifically includes: determining that the three-dimensional vector of the second point in the geometric information of the second object satisfies the following formula:
P = M × P' + D
wherein P represents the three-dimensional vector of the second point in the geometric information of the second object, M represents the three-dimensional rotation matrix corresponding to the second object, D represents the three-dimensional translation vector corresponding to the second object, and P' represents the three-dimensional vector of the second point in the initial geometric information.
Optionally, the image processing method further includes: creating a color buffer and a depth buffer corresponding to the size of the screen; and storing the target distance in the depth buffer and the target color in the color buffer.
Optionally, the image processing method further includes: determining that a second distance is less than the target distance, the second distance being a distance between the second point and the screen; updating the target distance to the second distance and updating the target color to the color of the second point; when the first distance is the minimum value, updating the second distance to the first distance, wherein the first distance is the distance between the first point and the screen; the updating the target color to the color of the point corresponding to the target value to obtain the target image corresponding to the image to be processed specifically includes: updating the color of the second point to the color of the first point to obtain the target image.
Optionally, the image processing method further includes: creating a mask buffer corresponding to the size of the screen, the mask buffer being used to store source information for each distance stored in the depth buffer, wherein the source information of a distance indicates whether the distance is from the first object or the second object; and updating each color stored in the color buffer to the color of each point in the first object; the updating the target color to the color of the point corresponding to the target value to obtain the target image corresponding to the image to be processed specifically includes: when the source information of the target distance indicates that the target distance is from the second object, updating the target color to the color of the second point to obtain the target image.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus. The apparatus may include an acquisition module, a processing module, and a determination module. The acquisition module is configured to acquire light field data of an image to be processed and geometric information of a first object of the image to be processed, wherein the light field data represents light ray information of the area where a second object of the image to be processed is located, and an occlusion relationship exists between the first object and the second object. The processing module is configured to add geometric information corresponding to the light field data to the second object to obtain the geometric information of the second object. The determining module is configured to draw the first object and the second object according to the geometric information of the first object and the geometric information of the second object, so as to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen. The determining module is further configured to determine, for each point in the first object and each point in the second object, a target value, the target value being the minimum of the distance between a first point and the screen and the distance between a second point and the screen, the first point being any point in the first object and the second point being the point in the second object corresponding to the first point. The processing module is further configured to update a target color to the color of the point corresponding to the target value when the target value is smaller than a target distance, so as to obtain a target image corresponding to the image to be processed, wherein the target distance is the minimum distance between the point corresponding to the position of the first point in a depth buffer and the screen, the target color is the color of the point corresponding to the position of the first point in a color buffer, the depth buffer stores the current minimum distance between each point of the image to be processed and the screen, and the color buffer stores the current color of each point of the image to be processed.
Optionally, the determining module is specifically configured to process the light field data according to a preset algorithm, and determine initial geometric information of the second object; the processing module is specifically configured to adjust the initial geometric information to obtain geometric information of the second object, where an area corresponding to the geometric information of the second object overlaps with an area corresponding to the light field data.
Optionally, the determining module is specifically further configured to determine point cloud data for characterizing the initial geometric information based on ray information of an area where the second object is located and a multi-angle reconstruction point cloud technology; or, the determining module is specifically further configured to determine a geometric shape of the second object in a three-dimensional space, determine a parameter for characterizing the geometric shape, and take the geometric shape characterized by the parameter as the initial geometric information; or, the determining module is specifically further configured to obtain a triangular mesh model corresponding to the second object in response to the configuration operation, and determine geometric information included in the triangular mesh model as the initial geometric information.
Optionally, the determining module is specifically further configured to determine that the three-dimensional vector of the second point in the geometric information of the second object satisfies the following formula:
P = M × P' + D
wherein P represents the three-dimensional vector of the second point in the geometric information of the second object, M represents the three-dimensional rotation matrix corresponding to the second object, D represents the three-dimensional translation vector corresponding to the second object, and P' represents the three-dimensional vector of the second point in the initial geometric information.
Optionally, the image processing apparatus further comprises a creation module; the creation module is configured to create a color buffer and a depth buffer corresponding to the size of the screen; the processing module is further configured to store the target distance in the depth buffer and store the target color in the color buffer.
Optionally, the determining module is further configured to determine that a second distance is less than the target distance, the second distance being a distance between the second point and the screen; the processing module is further configured to update the target distance to the second distance and update the target color to the color of the second point; the processing module is further configured to update the second distance to the first distance when the first distance is the minimum value, the first distance being a distance between the first point and the screen; the processing module is specifically configured to update the color of the second point to the color of the first point to obtain the target image.
Optionally, the creating module is further configured to create a mask buffer corresponding to the size of the screen, the mask buffer being used to store source information for each distance stored in the depth buffer, wherein the source information of a distance indicates whether the distance is from the first object or the second object; the processing module is further configured to update each color stored in the color buffer to the color of each point in the first object; and the processing module is specifically configured to update the target color to the color of the second point to obtain the target image when the source information of the target distance indicates that the target distance is from the second object.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, which may include: a processor and a memory configured to store processor-executable instructions; wherein the processor is configured to execute the instructions to implement any of the alternative image processing methods of the first aspect described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having instructions stored thereon, which when executed by an electronic device, enable the electronic device to perform any one of the above-described alternative image processing methods of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the optional image processing method as in any of the first aspects.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
based on any one of the above aspects, in the present disclosure, the image processing apparatus may acquire the light field data of the image to be processed and the geometric information of the first object of the image to be processed, and may add geometric information corresponding to the light field data to the second object, so as to obtain the geometric information of the second object; the image processing apparatus may then draw the first object and the second object based on their geometric information to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen, and determine a target value for each point in the first object and each point in the second object; when the target value is smaller than the target distance, the image processing apparatus updates the target color to the color of the point corresponding to the target value, so as to obtain the target image corresponding to the image to be processed. In the embodiment of the disclosure, the electronic device may determine the point closest to the screen based on the distance between each point in the first object and the screen, the distance between each point in the second object and the screen, and the minimum distances stored in the depth buffer, and decide whether to update the colors stored in the color buffer to obtain the target image. Thus, according to the scheme of the present disclosure, a target image incorporating the light field data and the geometric information of the first object and the second object can be obtained, so that the occlusion relationships among the objects can be rendered accurately, and rendering realism is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the disclosure;
FIG. 2 shows a flow diagram of yet another image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating yet another image processing method provided by an embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating yet another image processing method provided by an embodiment of the present disclosure;
FIG. 5 shows a flow diagram of yet another image processing method provided by an embodiment of the present disclosure;
FIG. 6 shows a flow diagram of yet another image processing method provided by an embodiment of the present disclosure;
FIG. 7 is a flow chart illustrating yet another image processing method provided by an embodiment of the present disclosure;
FIG. 8 shows a flow diagram of yet another image processing method provided by an embodiment of the present disclosure;
Fig. 9 shows a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
fig. 10 shows a schematic structural diagram of still another image processing apparatus provided by an embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or components.
The data referred to in this disclosure may be data authorized by the user or sufficiently authorized by the parties.
As described in the background art, since the prior art does not consider the influence of the light field data on the image to be processed, and in particular on some of the objects in the image to be processed, the final display effect of the image may not be determined accurately.
Based on this, the embodiments of the disclosure provide an image processing method in which the image processing apparatus may obtain the light field data of an image to be processed and the geometric information of a first object of the image to be processed, and may add geometric information corresponding to the light field data to a second object, so as to obtain the geometric information of the second object; the image processing apparatus may then draw the first object and the second object based on their geometric information to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen, and determine a target value for each point in the first object and each point in the second object; when the target value is smaller than the target distance, the image processing apparatus updates the target color to the color of the point corresponding to the target value, so as to obtain the target image corresponding to the image to be processed. In the embodiment of the disclosure, the electronic device may determine the point closest to the screen based on the distance between each point in the first object and the screen, the distance between each point in the second object and the screen, and the minimum distances stored in the depth buffer, and decide whether to update the colors stored in the color buffer to obtain the target image. Thus, according to the scheme of the present disclosure, a target image incorporating the light field data and the geometric information of the first object and the second object can be obtained, so that the occlusion relationships among the objects can be rendered accurately, and rendering realism is improved.
The image processing method and apparatus, electronic device, and storage medium provided by the embodiments of the disclosure are applied to scenarios in which an image to be processed needs processing, and in particular to scenarios in which light field data and geometric information need to be superimposed. When the electronic device obtains the light field data of the image to be processed and the geometric information of the first object of the image to be processed, the effect of superimposing the light field data and the geometric information can be obtained according to the method provided by the embodiments of the disclosure, yielding the target image.
The image processing method provided by the embodiment of the present disclosure is exemplarily described below with reference to the accompanying drawings:
it will be appreciated that the electronic device performing the image processing method provided in the embodiments of the present disclosure may be a mobile phone, a tablet computer, a desktop computer, a laptop, a handheld computer, a notebook, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or the like, in which a content community application may be installed and used; the present disclosure does not particularly limit the specific form of the electronic device. The device can interact with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, a handwriting device, and the like.
As shown in fig. 1, the image processing method provided by the embodiment of the present disclosure may include S101 to S105.
S101, acquiring light field data in an image to be processed and geometric information of a first object of the image to be processed.
The light field data represents light ray information of the area where a second object of the image to be processed is located, and an occlusion relationship exists between the first object and the second object.
It will be appreciated that the image to be processed may include at least two objects (i.e., at least the first object and the second object). The light field data of the image to be processed can represent the light ray information of the area where each object included in the image is located. The geometric information of an object is the position information of the object in three-dimensional Euclidean space.
The object in embodiments of the present disclosure may be an object present in the real world, such as an apple, or a virtual item, such as game equipment; embodiments of the present disclosure do not particularly limit the type of object.
S102, adding geometric information corresponding to the light field data for the second object to obtain the geometric information of the second object.
It can be understood that the acquired data (i.e. the light field data in the image to be processed and the geometric information of the first object of the image to be processed) does not include the geometric information of the second object, and the image processing device needs to add the geometric information to the second object, so as to obtain the geometric information of the second object.
S103, drawing the first object and the second object according to the geometric information of the first object and the geometric information of the second object so as to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen.
In connection with the description of the above embodiment, it should be understood that the geometric information of the first object includes position information of each point (or each point to be drawn) in the first object, and the image processing apparatus may determine a distance between each point in the first object and the screen based on the position information of each point in the first object and the screen. Similarly, the image processing apparatus may also determine a distance between each point in the second object and the screen based on the position information of each point in the second object and the screen.
S104, determining a target value for each point in the first object and each point in the second object.
The target value is the minimum value of the distance between the first point and the screen and the distance between the second point and the screen, wherein the first point is any point in the first object, and the second point is the point corresponding to the first point in the second object.
It will be appreciated that occlusion relationships may exist between the objects included in the image to be processed; for example, a partial region (or some points) of the first object may occlude the second object. The image processing apparatus needs to determine which point is closest to the screen, since the closer point (e.g., the first point) occludes the corresponding point of the other object (i.e., the point farther from the screen, e.g., the second point); this in turn determines the final display effect of the image to be processed.
And S105, when the target value is smaller than the target distance, updating the target color to the color of the point corresponding to the target value so as to obtain a target image corresponding to the image to be processed.
The target distance is the minimum distance between the point corresponding to the position of the first point in the depth buffer and the screen; the target color is the color of the point corresponding to the position of the first point in the color buffer. The depth buffer stores the current minimum distance between each point of the image to be processed and the screen, and the color buffer stores the current color of each point of the image to be processed.
It will be appreciated that when the distance between a point of an object (e.g., the first point) and the screen is smaller than the target distance (i.e., the distance between the point in the depth buffer corresponding to the position of the first point and the screen), the image processing apparatus may update the target color (i.e., the color of the point in the color buffer corresponding to the position of the first point) to the color of the first point; otherwise, i.e., when the distance between the first point and the screen is greater than or equal to the target distance, the target color remains unchanged.
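This per-point depth test can be sketched as follows (a minimal illustration, assuming numpy arrays serve as the depth and color buffers; the function and variable names are not from the patent):

```python
import numpy as np

def depth_test_update(depth_buf, color_buf, x, y, distance, color):
    """Update one pixel if `distance` wins the depth test.

    depth_buf: (H, W) array holding the current minimum distance per pixel.
    color_buf: (H, W, 3) array holding the current color per pixel.
    """
    if distance < depth_buf[y, x]:   # closer than anything drawn so far
        depth_buf[y, x] = distance   # keep the new minimum distance
        color_buf[y, x] = color      # the nearer point's color wins
    # otherwise both buffers keep their stored values
```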
Alternatively, the image processing apparatus may display the target image after obtaining the target image, or may send the target image to another electronic device, so that the other electronic device displays the target image.
The technical solution provided by this embodiment at least brings the following beneficial effects: as known from S101 to S105, the image processing apparatus may acquire the light field data of the image to be processed and the geometric information of the first object of the image to be processed, and may add geometric information corresponding to the light field data to the second object, so as to obtain the geometric information of the second object; the image processing apparatus may then draw the first object and the second object based on their geometric information to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen, and determine a target value for each point in the first object and each point in the second object; when the target value is smaller than the target distance, the image processing apparatus updates the target color to the color of the point corresponding to the target value, so as to obtain the target image corresponding to the image to be processed. In the embodiment of the disclosure, the electronic device may determine the point closest to the screen based on the distance between each point in the first object and the screen, the distance between each point in the second object and the screen, and the minimum distances stored in the depth buffer, and decide whether to update the colors stored in the color buffer to obtain the target image. Thus, according to the scheme of the present disclosure, a target image incorporating the light field data and the geometric information of the first object and the second object can be obtained, so that the occlusion relationships among the objects can be rendered accurately, and rendering realism is improved.
Referring to fig. 1, as shown in fig. 2, adding geometric information corresponding to the light field data to the second object to obtain the geometric information of the second object may specifically include S1021-S1022.
S1021, processing the light field data according to a preset algorithm, and determining initial geometric information of the second object.
S1022, adjusting the initial geometric information to obtain the geometric information of the second object.
Wherein the region corresponding to the geometric information of the second object overlaps with the region corresponding to the light field data.
It should be understood that the initial geometric information of the second object determined in S1021 may not match (or may differ greatly from) the real position and/or real pose of the second object, so the image processing apparatus needs to adjust the initial geometric information of the second object to obtain the geometric information of the second object, i.e., geometric information whose corresponding region overlaps with the region corresponding to the light field data.
The technical scheme provided by the embodiment at least has the following beneficial effects: as known from S1021-S1022, the image processing apparatus may process the light field data according to a preset algorithm to determine initial geometric information of the second object; the image processing device may then adjust the initial geometry information to obtain the geometry information of the second object. In the embodiment of the disclosure, the image processing device can adjust the initial geometric information of the second object to obtain the geometric information of the second object, and can accurately and effectively determine the geometric information of the second object, thereby improving the accuracy of image rendering.
Referring to fig. 2, as shown in fig. 3, the above-mentioned determination of the initial geometric information of the second object specifically includes S1021a.
S1021a, determining point cloud data used for representing initial geometric information based on the light information of the region where the second object is located and a multi-angle reconstruction point cloud technology.
It should be understood that the ray information of the region where the second object is located is the light field data of the image to be processed. In the embodiment of the disclosure, the image processing apparatus may determine, based on the light field data (or light ray information) and a multi-view stereo (MVS) point cloud reconstruction technique (specifically, algorithms included in MVS), characteristic or representative points of the object, so as to obtain point cloud data that represents the initial geometric information of the second object.
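A full multi-view reconstruction pipeline is beyond a short example; as an illustrative stand-in only, the sketch below triangulates sparse 3D points from two calibrated views with OpenCV, assuming the camera projection matrices and matched pixel coordinates are already available (none of these names come from the patent):

```python
import numpy as np
import cv2

def triangulate_two_views(P1, P2, pts1, pts2):
    """Recover sparse 3D points from two calibrated views.

    P1, P2: 3x4 camera projection matrices.
    pts1, pts2: 2xN arrays of matched pixel coordinates.
    Returns an Nx3 array of 3D points, a crude stand-in for the
    point cloud characterizing the initial geometric information.
    """
    pts_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous
    return (pts_h[:3] / pts_h[3]).T                    # dehomogenize to Nx3
```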
The technical scheme provided by the embodiment at least has the following beneficial effects: as can be seen from S1021a, the image processing apparatus may determine, based on the light information of the region where the second object is located and the multi-angle reconstruction point cloud technology, point cloud data for representing initial geometric information of the second object, so as to improve the determination efficiency of the initial geometric information, and further improve the rendering efficiency of the image.
Referring to fig. 2, as shown in fig. 4, the determining the initial geometric information of the second object further specifically includes S1021b.
S1021b, determining the geometric shape of the second object in the three-dimensional space, determining parameters for representing the geometric shape, and taking the geometric shape represented by the parameters as initial geometric information.
It will be appreciated that the image processing apparatus may determine whether the second object can be represented by a geometric shape. For example, suppose the image processing apparatus determines that the second object is an apple, which can be represented by a sphere (the shape of the apple is the same as or similar to that of a sphere); that is, the image processing apparatus may determine that the geometry of the second object in three-dimensional space is a sphere. The image processing apparatus may then determine parameters characterizing the geometry, such as the sphere center and radius, and determine the geometry characterized by these parameters, i.e., the sphere obtained from the center and radius, as the initial geometric information of the second object.
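For the sphere example, such a parametric representation might look as follows (a sketch only; the patent does not prescribe a sampling scheme, and all names are illustrative):

```python
import numpy as np

def sphere_points(center, radius, n_theta=32, n_phi=64):
    """Sample surface points of a sphere given its characterizing parameters.

    The (center, radius) pair is the parameter set characterizing the
    geometry; the sampled points stand in for the initial geometric
    information of the second object.
    """
    theta = np.linspace(0.0, np.pi, n_theta)      # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)    # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    pts = np.stack([np.sin(t) * np.cos(p),
                    np.sin(t) * np.sin(p),
                    np.cos(t)], axis=-1)
    return np.asarray(center) + radius * pts.reshape(-1, 3)
```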
Alternatively, the image processing apparatus may characterize the initial geometric information based on a constructive solid geometry (CSG) representation.
The technical scheme provided by the embodiment at least has the following beneficial effects: as known from S1021b, the image processing apparatus may determine a geometry of the second object in the three-dimensional space, and determine a parameter for characterizing the geometry, and take the geometry characterized by the parameter as initial geometric information of the second object. According to the scheme provided by the disclosure, the initial geometric information can be determined based on the geometric shape of the second object in the three-dimensional space and the corresponding parameters, so that the determination efficiency of the initial geometric information can be improved, and the rendering efficiency of the image is further improved.
Referring to fig. 2, as shown in fig. 5, the determining the initial geometric information of the second object further specifically includes S1021c.
S1021c, responding to the configuration operation, acquiring a triangular mesh model corresponding to the second object, and determining geometric information included in the triangular mesh model as initial geometric information.
Alternatively, the image processing apparatus may select (or determine) a triangular mesh model corresponding to the second object from a plurality of preset triangular mesh models, and determine geometric information included in the triangular mesh model corresponding to the second object as initial geometric information of the second object. It should be understood that the plurality of preset triangular mesh models may be manually drawn and then stored in the image processing apparatus.
The technical scheme provided by the embodiment at least has the following beneficial effects: as known from S1021c, the image processing apparatus may acquire a triangular mesh model of the second object in response to the configuration operation, and determine geometric information included in the triangular mesh model as initial geometric information of the second object. In the embodiment of the disclosure, the image processing device can reasonably and accurately determine the initial geometric information of the second object, so that the rendering efficiency of the image is improved.
In an implementation manner of the embodiment of the present disclosure, the adjusting the initial geometric information to obtain the geometric information of the second object may specifically include:
determining that the three-dimensional vector of the second point in the geometric information of the second object satisfies the following formula:
P = M × P' + D
wherein P represents a three-dimensional vector of the second point in the geometric information of the second object, M represents a three-dimensional rotation matrix corresponding to the second object, D represents a three-dimensional translation vector corresponding to the second object, and P' represents a three-dimensional vector of the second point in the initial geometric information.
It should be understood that the initial geometric information of the second object may include the three-dimensional vector of any point (for example, the second point) of the second object; using the above formula, the image processing apparatus may determine the three-dimensional vector of the second point in the geometric information of the second object, that is, rotate and translate the three-dimensional vector of the second point in the initial geometric information accordingly. In this way the image processing apparatus can determine the three-dimensional vector of each point in the geometric information of the second object, thereby obtaining the geometric information of the second object.
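A minimal sketch of this adjustment, applied to an entire point set at once (assuming the points P' are stored as rows of an array; the vectorized form is an implementation choice, not part of the patent):

```python
import numpy as np

def adjust_geometry(points, M, D):
    """Apply P = M × P' + D to every point of the initial geometry.

    points: (N, 3) array of three-dimensional vectors P'.
    M: (3, 3) rotation matrix; D: (3,) translation vector.
    Returns the (N, 3) geometric information of the second object.
    """
    return points @ np.asarray(M).T + np.asarray(D)
```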
The technical solution provided by this embodiment at least brings the following beneficial effects: the image processing apparatus can rotate and translate the initial geometric information of the second object (specifically, the three-dimensional vector of each of its points) to obtain the geometric information of the second object, and can determine that geometric information accurately and reasonably, thereby improving the realism of the rendering effect.
Referring to fig. 1, as shown in fig. 6, the image processing method provided in the embodiment of the present disclosure may further include S106 to S107.
S106, creating a color buffer and a depth buffer corresponding to the size of the screen.
It should be understood that the color buffer and the depth buffer are buffers corresponding to the size of the screen; in particular, the sizes of the two buffers may be the same as the size of the screen, or may be larger than the size of the screen.
In connection with the above description of the embodiments, it should be understood that the depth buffer is used to store the minimum distance between each current point of the image to be processed and the screen, and the color buffer is used to store the color of each current point of the image to be processed.
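A minimal sketch of S106 follows; the initial contents (+inf distances so the first drawn point always passes the depth test, and a black background) are assumptions, since the patent does not specify initial buffer values:

```python
import numpy as np

def create_buffers(width, height):
    """Create screen-sized depth and color buffers."""
    depth_buf = np.full((height, width), np.inf, dtype=np.float32)
    color_buf = np.zeros((height, width, 3), dtype=np.uint8)
    return depth_buf, color_buf
```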
Note that the embodiment of the present disclosure does not limit the execution order of S104 and S106 described above. For example, S104 may be performed first and then S106 may be performed, S106 may be performed first and then S104 may be performed, or S104 and S106 may be performed simultaneously. For convenience of example, S104 is performed before S106 is performed in fig. 6.
S107, storing the target distance into a depth buffer, and storing the target color into a color buffer.
In connection with the above description of the embodiments, it should be understood that the target distance is the minimum distance between the point in the depth buffer corresponding to the position of the first point and the screen, the target color is the color of the point in the color buffer corresponding to the position of the first point, and the first point is any one of the points in the first object.
Alternatively, the target distance may be understood as a minimum distance between a point in the depth buffer corresponding to the position of the second point and the screen, and the target color may be understood as a color of a point in the color buffer corresponding to the position of the second point, where the second point is a point in the second object corresponding to the first point.
It will be appreciated that the image processing apparatus may copy the data stored in the color buffer (i.e. the color of each point of the image to be processed) into the screen so that the screen may display the color of each point of the image to be processed, i.e. the target image.
The technical solution provided by this embodiment at least brings the following beneficial effects: as known from S106 to S107, the image processing apparatus can create a color buffer and a depth buffer corresponding to the size of the screen, then store the target distance in the depth buffer and the target color in the color buffer. In the embodiment of the disclosure, the image processing apparatus may render the image to be processed using the minimum distance between each point and the screen stored in the depth buffer and, for each pixel of the image to be processed, the color of its nearest point stored in the color buffer, so as to obtain the target image accurately and improve rendering realism.
Referring to fig. 6, as shown in fig. 7, the image processing method provided in the embodiment of the present disclosure may further include S108 to S110.
S108, determining that the second distance is smaller than the target distance.
The second distance is the distance between the second point and the screen.
It should be appreciated that when the distance between the second point and the screen (i.e., the second distance) is less than the target distance, a point of the second object (i.e., the second point) is closer to the screen than the point in the depth buffer corresponding to the position of the second point (or the position of the first point). Since the color of each point in the first object is stored in the color buffer in S1042a, the data stored in the depth buffer (i.e., the target distance) and the data stored in the color buffer (i.e., the target color) may need to be updated.
In one implementation of the embodiments of the present disclosure, determining the second distance may specifically include:
determining that the second distance satisfies the following formulas:
N × Q = [x, y, z, t]
D = z / t
wherein N represents the projection matrix from the three-dimensional space to the screen, Q represents the homogeneous coordinates of the second point, and D represents the second distance.
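The two formulas translate directly into code (a minimal illustration; the function name is not from the patent):

```python
import numpy as np

def screen_distance(N, Q):
    """Compute a point's screen distance from N × Q = [x, y, z, t], D = z / t.

    N: (4, 4) projection matrix from three-dimensional space to the screen.
    Q: (4,) homogeneous coordinates of the point (e.g. the second point).
    """
    x, y, z, t = np.asarray(N) @ np.asarray(Q)
    return z / t
```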
S109, updating the target distance to the second distance and updating the target color to the color of the second point.
It is understood that when the above-described second point is closer to the screen than a point in the depth buffer corresponding to the position of the second point, the image processing apparatus may update the target distance in the depth buffer to the distance between the second point and the screen (i.e., the second distance), and update the target color in the color buffer to the color of the second point.
Optionally, when the second distance is greater than or equal to the target distance, the second point is farther from the screen than the point corresponding to its position in the depth buffer, so the target distance and the target color do not need to be updated.
And S110, when the first distance is the minimum value, updating the second distance to the first distance.
The first distance is a distance between the first point and the screen.
In connection with the description of the above embodiments, it should be understood that the minimum value is the minimum of the distance between the first point and the screen and the distance between the second point and the screen. When the first distance is the minimum value, that is, when the first distance is smaller than the second distance, the first point is closer to the screen than the second point, so the image processing apparatus may update the stored distance from the second distance to the first distance. In other words, the target distance stored in the depth buffer is always updated to the distance of the point currently closest to the screen: first from the target distance to the second distance, and then from the second distance to the first distance.
Continuing with fig. 7, updating the target color to the color of the point corresponding to the target value to obtain the target image corresponding to the image to be processed may specifically include S1051.
S1051, updating the color of the second point to the color of the first point to obtain the target image.
It will be appreciated that when the first point is closer to the screen than the second point, the image processing apparatus may further update the color of the second point to the color of the first point. In other words, the target color stored in the color buffer is always updated to the color of the point currently closest to the screen: first from the target color to the color of the second point, and then from the color of the second point to the color of the first point.
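Putting S108 to S110 and S1051 together, both objects can be drawn through the same depth test so that the nearest point wins each pixel (a sketch reusing the depth_test_update helper from the earlier example; the (x, y, distance, color) tuple format for projected points is an assumption):

```python
def draw_objects(depth_buf, color_buf, second_obj, first_obj):
    """Draw the second object, then the first; the nearest point wins.

    second_obj / first_obj: iterables of (x, y, distance, color) tuples,
    i.e. each object's points already projected to the screen.
    """
    for obj in (second_obj, first_obj):
        for x, y, dist, color in obj:
            depth_test_update(depth_buf, color_buf, x, y, dist, color)
```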
The technical solution provided by this embodiment at least brings the following beneficial effects: as known from S108 to S110 and S1051, the image processing apparatus may determine that the second distance (i.e., the distance between the second point and the screen) is smaller than the target distance, then update the target distance to the second distance and the target color to the color of the second point; afterwards, when the first distance (i.e., the distance between the first point and the screen) is the minimum value, it updates the stored distance to the first distance and the stored color to the color of the first point to obtain the target image. In the embodiment of the disclosure, the image processing apparatus first draws the second object (specifically, each of its points) and then the first object, so that the depth and color buffers always hold the distance and color of the point closest to the screen. In this way the occlusion relationship between the first object and the second object can be rendered accurately and effectively, and a more realistic target image can be determined.
Referring to fig. 6, as shown in fig. 8, the image processing method provided in the embodiment of the present disclosure may further include S111 to S112.
S111, creating a mask buffer area corresponding to the size of the screen.
The mask buffer is used for storing source information of each distance stored in the depth buffer, and the source information of a distance indicates whether the distance is from the first object or the second object.
It should be appreciated that when the source information for a distance indicates that the distance is from a first object, it is illustrated that a point in the first object (e.g., a first point) is closer to the screen than a point in a second object (e.g., a second point).
In the embodiment of the disclosure, the source information of a distance can be represented by different numbers. For example, "0" may indicate that the distance is from the first object and "1" that it is from the second object; alternatively, "0" may indicate the first object and "255" the second object, and so on.
And S112, updating each color stored in the color buffer area to the color of each point in the first object.
It will be appreciated that the image processing apparatus updates each color stored in the color buffer to the color of the corresponding point in the first object; that is, it first draws the first object (specifically, each point of the first object).
As further shown in fig. 8, in an implementation manner of the embodiment of the present disclosure, the updating the target color to the color of the point corresponding to the target value to obtain the target image corresponding to the image to be processed may specifically further include S1052.
S1052, when the source information of the target distance indicates that the target distance is from the second object, updating the target color to the color of the second point to obtain the target image.
In connection with the above description of the embodiments, it should be understood that the target distance is the minimum distance between the point in the depth buffer corresponding to the position of the first point and the screen. When the source information of that distance indicates that the target distance is from the second object, the second distance is smaller than the first distance, i.e., the second point is closer to the screen than the first point, so the image processing apparatus can update the target color, which at this moment is the color of the first point, to the color of the second point.
Optionally, when the source information of the target distance indicates that the target distance is from the first object, since the color of each point in the first object has been updated (or plotted), i.e., has been stored in the color buffer, there is no need to update the color currently stored in the color buffer that corresponds to the location of the first point (i.e., the color of the first point).
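Putting S112 and S1052 together, the following minimal sketch (reusing mask_buf and the FROM_SECOND constant from the sketch above; all names are editorial assumptions) first draws the first object's colors and then overwrites only the positions whose stored distance came from the second object:

```python
import numpy as np

def compose_target_image(first_colors, second_colors, mask_buf):
    # first_colors / second_colors: (H, W, 3) uint8 arrays holding each
    # object's colors already projected to screen positions (assumed here
    # for brevity).
    target = first_colors.copy()                 # S112: draw the first object first
    from_second = (mask_buf == FROM_SECOND)      # source information: second object is nearer
    target[from_second] = second_colors[from_second]  # S1052: take the second point's color
    return target
```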
The technical scheme provided by this embodiment has at least the following beneficial effects. As can be seen from S111, S112 and S1052, the image processing apparatus creates a mask buffer corresponding to the size of the screen to store source information for each distance stored in the depth buffer; it then updates each color stored in the color buffer to the color of each point in the first object, and, when the source information of the target distance indicates that the target distance comes from the second object, updates the target color to the color of the second point to obtain the target image. In other words, the apparatus first draws each point in the first object, and then, wherever the source information shows that the second distance is smaller than the first distance, draws the second object by updating the color of the first point (the target color at this moment) to the color of the second point. In this way, the occlusion relationship between the first object and the second object is rendered accurately and effectively and a more realistic target image is obtained.
It will be appreciated that, in actual implementation, the electronic device according to the embodiments of the present disclosure may include one or more hardware structures and/or software modules for implementing the corresponding image processing method. Those of skill in the art will readily appreciate that the algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Based on such an understanding, an embodiment of the present disclosure correspondingly provides an image processing apparatus; fig. 9 shows a schematic structural diagram of the image processing apparatus provided by the embodiment of the present disclosure. As shown in fig. 9, the image processing apparatus 10 may include an acquisition module 101, a processing module 102, and a determining module 103.
The acquisition module 101 is configured to acquire light field data in an image to be processed and geometric information of a first object of the image to be processed, where the light field data is used for representing ray information of the area where a second object of the image to be processed is located, and the first object and the second object have an occlusion relationship.
The processing module 102 is configured to add geometric information corresponding to the light field data to the second object, so as to obtain the geometric information of the second object.
The determining module 103 is configured to draw the first object and the second object according to the geometric information of the first object and the geometric information of the second object, so as to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen.
The determining module 103 is further configured to determine, for each point in the first object and each point in the second object, a target value, where the target value is the minimum of the distance between a first point and the screen and the distance between a second point and the screen, the first point being any point in the first object and the second point being the point in the second object corresponding to the first point.
The processing module 102 is further configured to, when the target value is smaller than a target distance, update a target color to the color of the point corresponding to the target value so as to obtain a target image corresponding to the image to be processed. The target distance is the minimum distance between the point in a depth buffer corresponding to the position of the first point and the screen; the target color is the color of the point in a color buffer corresponding to the position of the first point; the depth buffer is used for storing the current minimum distance between each point of the image to be processed and the screen; and the color buffer is used for storing the current color of each point of the image to be processed.
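As a small numeric illustration of the determining and processing steps above (an editorial sketch; the array names and values are assumptions), the target value is the per-position minimum of the two distances, and a color update happens only where it beats the stored target distance:

```python
import numpy as np

d_first  = np.array([[3.0, 7.0]])            # distances from first-object points to the screen
d_second = np.array([[5.0, 2.0]])            # distances from the corresponding second-object points
target_value = np.minimum(d_first, d_second) # minimum distance per screen position

target_distance = np.array([[4.0, 4.0]])     # minimum distances currently in the depth buffer
update = target_value < target_distance      # where true, the nearer point's color is taken
print(target_value)   # [[3. 2.]]
print(update)         # [[ True  True]]
```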
Optionally, the determining module 103 is specifically configured to process the light field data according to a preset algorithm and determine initial geometric information of the second object.
The processing module 102 is specifically configured to adjust the initial geometric information to obtain geometric information of the second object, where an area corresponding to the geometric information of the second object overlaps with an area corresponding to the light field data.
Optionally, the determining module 103 is specifically further configured to determine point cloud data for characterizing the initial geometric information based on the ray information of the area where the second object is located and a multi-angle point cloud reconstruction technique.
Alternatively, the determining module 103 is specifically further configured to determine a geometric shape of the second object in three-dimensional space, determine parameters for characterizing the geometric shape, and take the geometric shape characterized by the parameters as the initial geometric information.
Or, the determining module 103 is specifically further configured to obtain, in response to a configuration operation, a triangular mesh model corresponding to the second object, and determine the geometric information included in the triangular mesh model as the initial geometric information.
Optionally, the determining module 103 is specifically further configured to determine that the three-dimensional vector of the second point in the geometric information of the second object satisfies the following formula:
P=M×P'+D
Wherein P represents a three-dimensional vector of the second point in the geometric information of the second object, M represents a three-dimensional rotation matrix corresponding to the second object, D represents a three-dimensional translation vector corresponding to the second object, and P' represents a three-dimensional vector of the second point in the initial geometric information.
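As a worked numeric check of the formula (an editorial example; the particular rotation and translation values are assumptions), a 90-degree rotation about the z-axis combined with a unit translation along x gives:

```python
import numpy as np

M = np.array([[0.0, -1.0, 0.0],      # three-dimensional rotation matrix
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
D = np.array([1.0, 0.0, 0.0])        # three-dimensional translation vector
P_prime = np.array([1.0, 2.0, 3.0])  # second point in the initial geometric information

P = M @ P_prime + D                  # second point in the adjusted geometric information
print(P)                             # [-1.  1.  3.]
```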
Optionally, the image processing apparatus further comprises a creation module 104.
The creation module 104 is configured to create a color buffer and a depth buffer corresponding to the size of the screen.
The processing module 102 is further configured to store the target distance in the depth buffer and the target color in the color buffer.
Optionally, the determining module 103 is further configured to determine that a second distance is smaller than the target distance, the second distance being a distance between the second point and the screen.
The processing module 102 is further configured to update the target distance to the second distance and update the target color to the color of the second point.
The processing module 102 is further configured to update the second distance to the first distance when the first distance is the minimum value, the first distance being a distance between the first point and the screen.
The processing module 102 is specifically configured to update the color of the second point to the color of the first point to obtain the target image.
Optionally, the creation module 104 is further configured to create a mask buffer corresponding to the size of the screen, the mask buffer being used to store source information for each distance stored in the depth buffer, wherein the source information of a distance is used to indicate whether the distance comes from the first object or the second object;
the processing module 102 is further configured to update each color stored in the color buffer to the color of each point in the first object.
The processing module 102 is specifically configured to update the target color to the color of the second point to obtain the target image when the source information of the target distance indicates that the target distance is from the second object.
As described above, the embodiments of the present disclosure may divide the image processing apparatus into functional modules according to the above method examples. The integrated modules may be implemented in hardware or as software functional modules. It should be further noted that the division of modules in the embodiments of the present disclosure is merely a logical division of functions; other division manners may be used in practice. For example, each functional module may correspond to one function, or two or more functions may be integrated in one processing module.
The specific manner in which each module of the image processing apparatus performs its operations, and the corresponding beneficial effects, have been described in detail in the foregoing method embodiments and are not repeated here.
Fig. 10 is a schematic structural diagram of another image processing apparatus provided by the present disclosure. As shown in fig. 10, the image processing apparatus 20 may include at least one processor 201 and a memory 203 for storing processor-executable instructions. The processor 201 is configured to execute the instructions in the memory 203 to implement the image processing method in the above embodiments.
In addition, the image processing apparatus 20 may also include a communication bus 202 and at least one communication interface 204.
The processor 201 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in the solutions of the present disclosure.
The communication bus 202 may include a path for transferring information between the above components.
The communication interface 204 may be any transceiver-type device used to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 203 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be standalone and connected to the processor by a bus, or may be integrated with the processor.
The memory 203 is configured to store the instructions for executing the solutions of the present disclosure, and execution is controlled by the processor 201. The processor 201 is configured to execute the instructions stored in the memory 203 to implement the functions in the methods of the present disclosure.
In a particular implementation, as an embodiment, the processor 201 may include one or more CPUs, such as CPU0 and CPU1 in fig. 10.
In a specific implementation, as an embodiment, the image processing apparatus 20 may include a plurality of processors, such as the processor 201 and the processor 207 in fig. 10. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, as an embodiment, the image processing apparatus 20 may further include an output device 205 and an input device 206. The output device 205 communicates with the processor 201 and may display information in a variety of ways. For example, the output device 205 may be a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display device, a Cathode Ray Tube (CRT) display device, or a projector (projector), or the like. The input device 206 is in communication with the processor 201 and may accept user input in a variety of ways. For example, the input device 206 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is not limiting of the image processing apparatus 20 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In addition, the present disclosure also provides a computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform the image processing method as provided by the above embodiments.
In addition, the present disclosure also provides a computer program product comprising instructions which, when executed by a processor, cause the processor to perform the image processing method as provided by the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (16)

1. An image processing method, comprising:
acquiring light field data in an image to be processed and geometric information of a first object of the image to be processed, wherein the light field data are used for representing ray information of an area where a second object of the image to be processed is located, and the first object and the second object have an occlusion relationship;
adding geometric information corresponding to the light field data for the second object to obtain the geometric information of the second object;
drawing the first object and the second object according to the geometric information of the first object and the geometric information of the second object so as to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen;
determining a target value for each point in the first object and each point in the second object, wherein the target value is the minimum value of the distance between a first point and the screen and the distance between a second point and the screen, the first point is any point in the first object, and the second point is a point corresponding to the first point in the second object;
and when the target value is smaller than a target distance, updating a target color to be the color of a point corresponding to the target value so as to obtain a target image corresponding to the image to be processed, wherein the target distance is the minimum distance between a point corresponding to the position of the first point in a depth buffer and the screen, the target color is the color of a point corresponding to the position of the first point in a color buffer, the depth buffer is used for storing the minimum distance between each current point of the image to be processed and the screen, and the color buffer is used for storing the color of each current point of the image to be processed.
2. The image processing method according to claim 1, wherein the adding geometric information corresponding to the light field data to the second object to obtain the geometric information of the second object includes:
processing the light field data according to a preset algorithm, and determining initial geometric information of the second object;
and adjusting the initial geometric information to obtain the geometric information of the second object, wherein the region corresponding to the geometric information of the second object is overlapped with the region corresponding to the light field data.
3. The image processing method according to claim 2, wherein the determining the initial geometric information of the second object includes:
determining point cloud data for representing the initial geometric information based on the ray information of the area where the second object is located and a multi-angle point cloud reconstruction technique;
or,
determining the geometric shape of the second object in a three-dimensional space, determining parameters for characterizing the geometric shape, and taking the geometric shape characterized by the parameters as the initial geometric information;
or,
and responding to configuration operation, acquiring a triangular mesh model corresponding to the second object, and determining geometric information included in the triangular mesh model as the initial geometric information.
4. The image processing method according to claim 2, wherein said adjusting the initial geometric information to obtain the geometric information of the second object includes:
determining that the three-dimensional vector of the second point in the geometric information of the second object satisfies the following formula:
P=M×P'+D
wherein P represents a three-dimensional vector of the second point in the geometric information of the second object, M represents a three-dimensional rotation matrix corresponding to the second object, D represents a three-dimensional translation vector corresponding to the second object, and P' represents a three-dimensional vector of the second point in the initial geometric information.
5. The image processing method according to any one of claims 1 to 4, characterized in that the image processing method further comprises:
creating a color buffer and a depth buffer corresponding to the size of the screen;
the target distance is stored in the depth buffer and the target color is stored in the color buffer.
6. The image processing method according to claim 5, characterized in that the image processing method further comprises:
determining that a second distance is less than the target distance, the second distance being a distance between the second point and the screen;
updating the target distance to the second distance and updating the target color to the color of the second point;
updating the second distance to the first distance when the first distance is the minimum value, wherein the first distance is the distance between the first point and the screen;
the updating the target color to the color of the point corresponding to the target value to obtain the target image corresponding to the image to be processed comprises the following steps:
updating the color of the second point to the color of the first point to obtain the target image.
7. The image processing method according to claim 5, characterized in that the image processing method further comprises:
creating a mask buffer corresponding to the size of the screen, the mask buffer being used for storing source information for each distance stored in the depth buffer, wherein the source information of a distance is used for indicating whether the distance comes from the first object or the second object;
updating each color stored in the color buffer to the color of each point in the first object;
the updating the target color to the color of the point corresponding to the target value to obtain the target image corresponding to the image to be processed comprises the following steps:
when the source information of the target distance indicates that the target distance is from the second object, updating the target color to the color of the second point to obtain the target image.
8. An image processing apparatus, characterized by comprising an acquisition module, a processing module and a determination module;
the acquisition module is configured to acquire light field data in an image to be processed and geometric information of a first object of the image to be processed, wherein the light field data is used for representing ray information of an area where a second object of the image to be processed is located, and the first object and the second object have an occlusion relationship;
The processing module is configured to add geometric information corresponding to the light field data to the second object to obtain the geometric information of the second object;
the determining module is configured to draw the first object and the second object according to the geometric information of the first object and the geometric information of the second object so as to determine the distance between each point in the first object and the screen and the distance between each point in the second object and the screen;
the determining module is further configured to determine, for each point in the first object and each point in the second object, a target value, the target value being a minimum value of a distance between a first point and the screen and a distance between a second point and the screen, the first point being any one point in the first object, the second point being a point in the second object corresponding to the first point;
the processing module is further configured to update a target color to a color of a point corresponding to the target value when the target value is smaller than a target distance, so as to obtain a target image corresponding to the image to be processed, wherein the target distance is a minimum distance between a point corresponding to the position of the first point in a depth buffer and the screen, the target color is a color of a point corresponding to the position of the first point in a color buffer, the depth buffer is used for storing the minimum distance between each current point of the image to be processed and the screen, and the color buffer is used for storing the color of each current point of the image to be processed.
9. The image processing apparatus according to claim 8, wherein,
the determining module is specifically configured to process the light field data according to a preset algorithm and determine initial geometric information of the second object;
the processing module is specifically configured to adjust the initial geometric information to obtain geometric information of the second object, where an area corresponding to the geometric information of the second object overlaps with an area corresponding to the light field data.
10. The image processing apparatus according to claim 9, wherein,
the determining module is specifically configured to determine point cloud data for representing the initial geometric information based on the ray information of the area where the second object is located and a multi-angle point cloud reconstruction technique;
or,
the determining module is specifically configured to determine a geometric shape of the second object in a three-dimensional space, determine a parameter for characterizing the geometric shape, and take the geometric shape characterized by the parameter as the initial geometric information;
or,
the determining module is specifically further configured to obtain a triangular mesh model corresponding to the second object in response to a configuration operation, and determine geometric information included in the triangular mesh model as the initial geometric information.
11. The image processing apparatus according to claim 9, wherein,
the determining module is specifically further configured to determine that the three-dimensional vector of the second point in the geometric information of the second object satisfies the following formula:
P=M×P'+D
wherein P represents a three-dimensional vector of the second point in the geometric information of the second object, M represents a three-dimensional rotation matrix corresponding to the second object, D represents a three-dimensional translation vector corresponding to the second object, and P' represents a three-dimensional vector of the second point in the initial geometric information.
12. The image processing apparatus according to any one of claims 8 to 11, further comprising a creation module;
the creation module is configured to create a color buffer and a depth buffer corresponding to the size of the screen;
the processing module is further configured to store the target distance in the depth buffer and store the target color in the color buffer.
13. The image processing apparatus according to claim 12, wherein,
the determining module is further configured to determine that a second distance is less than the target distance, the second distance being a distance between the second point and the screen;
The processing module is further configured to update the target distance to the second distance and update the target color to the color of the second point;
the processing module is further configured to update the second distance to the first distance when the first distance is the minimum value, the first distance being a distance between the first point and the screen;
the processing module is specifically configured to update the color of the second point to the color of the first point to obtain the target image.
14. The image processing apparatus according to claim 12, wherein,
the creation module is further configured to create a mask buffer corresponding to the size of the screen, the mask buffer being used to store source information for each distance stored in the depth buffer, wherein the source information of a distance is used to indicate whether the distance comes from the first object or the second object;
the processing module is further configured to update each color stored in the color buffer to a color of each point in the first object;
the processing module is specifically configured to update the target color to the color of the second point to obtain the target image when the source information of the target distance indicates that the target distance is from the second object.
15. An electronic device, the electronic device comprising:
a processor;
a memory configured to store the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any of claims 1-7.
16. A computer readable storage medium having instructions stored thereon, which, when executed by an electronic device, cause the electronic device to perform the image processing method of any of claims 1-7.
CN202110875505.2A 2021-07-30 2021-07-30 Image processing method and device, electronic equipment and storage medium Active CN113436325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110875505.2A CN113436325B (en) 2021-07-30 2021-07-30 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113436325A CN113436325A (en) 2021-09-24
CN113436325B (en) 2023-07-28

Family

ID=77762531

Country Status (1)

Country Link
CN (1) CN113436325B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825544A (en) * 2015-11-25 2016-08-03 Vivo Mobile Communication Co., Ltd. Image processing method and mobile terminal
CN106228507A (en) * 2016-07-11 2016-12-14 Tianjin Zhongke Intelligent Identification Industry Technology Research Institute Co., Ltd. A kind of depth image processing method based on light field
CN106651943A (en) * 2016-12-30 2017-05-10 Hangzhou Dianzi University Occlusion geometric complementary model-based light field camera depth estimation method
CN108765542A (en) * 2018-05-31 2018-11-06 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image rendering method, electronic equipment and computer readable storage medium
CN109523622A (en) * 2018-11-15 2019-03-26 Aoben Future (Beijing) Technology Co., Ltd. A kind of non-structured light field rendering method
CN110009720A (en) * 2019-04-02 2019-07-12 Baidu Online Network Technology (Beijing) Co., Ltd. Image processing method, device, electronic equipment and storage medium in AR scene
CN110349246A (en) * 2019-07-17 2019-10-18 Guangxi Normal University A method of applied to the reconstruct distortion factor for reducing viewpoint in light field drafting
CN110889890A (en) * 2019-11-29 2020-03-17 Shenzhen SenseTime Technology Co., Ltd. Image processing method and device, processor, electronic device and storage medium
CN111739161A (en) * 2020-07-23 2020-10-02 Zhejiang Lab Human body three-dimensional reconstruction method and device under shielding condition and electronic equipment
CN112541960A (en) * 2019-09-19 2021-03-23 Alibaba Group Holding Ltd. Three-dimensional scene rendering method and device and electronic equipment
CN112819726A (en) * 2021-02-09 2021-05-18 Jiaxing Fengniao Technology Co., Ltd. Light field rendering artifact removing method
CN113129352A (en) * 2021-04-30 2021-07-16 Tsinghua University Sparse light field reconstruction method and device

Similar Documents

Publication Publication Date Title
CN111815755B (en) Method and device for determining blocked area of virtual object and terminal equipment
CN110544289B (en) Utilizing inter-frame coherence in a mid-ordering architecture
US9384522B2 (en) Reordering of command streams for graphical processing units (GPUs)
US8922565B2 (en) System and method for using a secondary processor in a graphics system
US10242481B2 (en) Visibility-based state updates in graphical processing units
US11908039B2 (en) Graphics rendering method and apparatus, and computer-readable storage medium
US20230120253A1 (en) Method and apparatus for generating virtual character, electronic device and readable storage medium
US20210343072A1 (en) Shader binding management in ray tracing
WO2022121653A1 (en) Transparency determination method and apparatus, electronic device, and storage medium
JP7262530B2 (en) Location information generation method, related device and computer program product
CN114792355A (en) Virtual image generation method and device, electronic equipment and storage medium
WO2019088865A1 (en) Method and system for removing hidden surfaces from a three-dimensional scene
CN109377552B (en) Image occlusion calculating method, device, calculating equipment and storage medium
CN109598672B (en) Map road rendering method and device
WO2023231926A1 (en) Image processing method and apparatus, device, and storage medium
CN113436325B (en) Image processing method and device, electronic equipment and storage medium
KR102225281B1 (en) Techniques for reduced pixel shading
CN115861510A (en) Object rendering method, device, electronic equipment, storage medium and program product
CN114549303B (en) Image display method, image processing method, image display device, image processing apparatus, image display device, image processing program, and storage medium
CN114677469A (en) Method and device for rendering target image, electronic equipment and storage medium
CN114020390A (en) BIM model display method and device, computer equipment and storage medium
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
US11869123B2 (en) Anti-aliasing two-dimensional vector graphics using a compressed vertex buffer
US20230410425A1 (en) Real-time rendering of image content generated using implicit rendering
JP5669199B2 (en) Image drawing apparatus, image drawing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant