CN112884909B - AR special effect display method and device, computer equipment and storage medium

AR special effect display method and device, computer equipment and storage medium

Info

Publication number
CN112884909B
Authority
CN
China
Prior art keywords
frame image
video frame
target
information
scene
Prior art date
Legal status
Active
Application number
CN202110203047.8A
Other languages
Chinese (zh)
Other versions
CN112884909A (en)
Inventor
Li Yufei (李宇飞)
Zhang Jianbo (张建博)
Current Assignee
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202110203047.8A
Publication of CN112884909A
Application granted
Publication of CN112884909B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality


Abstract

The disclosure provides a display method and device, a computer device, and a storage medium for AR special effects. The method includes: acquiring a video frame image obtained by an AR device capturing a target scene; determining a target AR special effect based on the video frame image, and determining environmental information of the target scene based on the video frame image; and performing processing corresponding to the environmental information on the video frame image based on the environmental information, and displaying the processed video frame image and the target AR special effect through the AR device. According to the embodiments of the disclosure, because the video frame image is processed, it becomes more suitable for displaying the AR special effect, and the display effect of the AR special effect is improved.

Description

AR special effect display method and device, computer equipment and storage medium
Technical Field
The disclosure relates to the technical field of augmented reality (Augmented Reality, AR), and in particular to a display method and device, a computer device, and a storage medium for AR special effects.
Background
Augmented reality (Augmented Reality, AR) technology skillfully fuses virtual information with the real world. Specifically, AR technology can calculate the pose information of an AR device in real time, determine an AR special effect based on the pose information, and then superimpose the corresponding AR special effect onto the picture shot by the AR device for display.
At present, how to improve the display effect of AR special effects on AR devices is an important topic in the industry.
Disclosure of Invention
The embodiment of the disclosure at least provides a display method, a device, computer equipment and a storage medium for AR special effects.
In a first aspect, an embodiment of the present disclosure provides a display method of an AR special effect, where the display method includes: acquiring a video frame image obtained by an AR device capturing a target scene; determining a target AR special effect based on the video frame image, and determining environmental information of the target scene based on the video frame image; and performing processing corresponding to the environmental information on the video frame image based on the environmental information, and displaying the processed video frame image and the target AR special effect through the AR device.
According to the embodiment of the disclosure, a video frame image obtained by the AR device capturing the target scene is acquired, the target AR special effect is determined based on the video frame image, and the environmental information of the target scene is determined based on the video frame image; processing corresponding to the environmental information is then performed on the video frame image, and the processed video frame image and the target AR special effect are fused and displayed through the AR device. In this way, the image can be adjusted based on the original effect of the video frame image, improving the display effect of the image fused with the AR special effect.
In an alternative embodiment, the video frame image is at least one frame image in a video stream; performing processing corresponding to the environmental information on the video frame image, and displaying the processed video frame image and the target AR special effect through the AR device, includes: performing processing corresponding to the environmental information on at least part of the images in the video stream, and displaying the processed at least part of the images and the target AR special effect through the AR device.
In this way, by processing at least part of the images in the video stream correspondingly to the environmental information, the display effect of fusing the video with the AR special effect can be improved.
In an alternative embodiment, the environmental information includes at least one of: time information, brightness information, scene information, and weather information.
In this way, at least one of time information, brightness information, scene information, and weather information can be used to effectively determine whether the image is suitable for fused display with the AR special effect.
In an alternative embodiment, for a case that the environmental information includes time information, the determining environmental information of the target scene based on the video frame image includes: reading system time, and determining time information of the target scene based on the system time; and/or, for a case where the environmental information includes luminance information, the determining environmental information of the target scene based on the video frame image includes: determining brightness information of the target scene based on pixel values of all pixel points in the video frame image; and/or, for a case where the environmental information includes scene information, the determining environmental information of the target scene based on the video frame image includes: performing scene detection processing on the video frame image by utilizing a pre-trained scene detection model to obtain scene information of the target scene; and/or, for a case where the environmental information includes weather information, the determining environmental information of the target scene based on the video frame image includes: performing weather detection processing on the video frame image by using a pre-trained weather detection model to obtain weather information of the target scene; or obtaining the geographic position information of the AR equipment; and acquiring weather information of the target scene based on the geographic position information.
In this way, through the above-described different processes, the environmental information of the video frame image can be obtained.
In an alternative embodiment, the determining the target AR special effect based on the video frame image includes: determining first pose information of the AR equipment in the target scene based on the video frame image; the target AR effect is determined from the at least one AR effect based on the first pose information and second pose information of the at least one AR effect in the target scene.
In this way, it is possible to determine a target AR effect to be displayed in the AR device from among a plurality of AR effects in the target scene.
In an optional embodiment, the determining, based on the video frame image, first pose information of the AR device in the target scene includes: performing key point identification on the video frame image to obtain a first key point in the video frame image; and determining a target second key point matched with the first key point from second key points of a three-dimensional model of the target scene based on the first key point, and determining first pose information of the AR equipment under the scene coordinate system based on three-dimensional coordinate values of the target second key point under the scene coordinate system.
Thus, the first pose information of the AR equipment under the scene coordinate system can be accurately determined.
In an alternative embodiment, the determining the first pose information of the AR device in the scene coordinate system based on the three-dimensional coordinate value of the target second key point in the scene coordinate system includes: and determining first pose information of the AR equipment in the scene coordinate system based on the two-dimensional coordinate value of the first key point in the image coordinate system and the three-dimensional coordinate value of the second key point of the target in the scene coordinate system.
In an alternative embodiment, the method further comprises: determining target environment information corresponding to the target AR special effect; the processing of the video frame image corresponding to the environment information based on the environment information includes: and processing the video frame image based on the environment information and the target environment information.
Therefore, the video frame image can be adjusted according to the environmental information required by different AR special effects, and the effect is better when the video frame image and the AR special effects are displayed in a fusion mode.
In an alternative embodiment, the processing corresponding to the environmental information is performed on the video frame image, including: performing at least one of the following on the video frame image: brightness adjustment processing, saturation adjustment processing, hue adjustment processing, and sharpness adjustment processing.
In a second aspect, embodiments of the present disclosure further provide a display device for AR special effects, the display device including: the acquisition module is used for acquiring a video frame image obtained by the AR equipment for acquiring a target scene; the first determining module is used for determining a target AR special effect based on the video frame image and determining environment information of the target scene based on the video frame image; and the processing module is used for processing the video frame image corresponding to the environment information based on the environment information and displaying the processed video frame image and the target AR special effect through the AR equipment.
In an alternative embodiment, the video frame image is at least one frame image in a video stream; the processing module, when performing processing corresponding to the environmental information on the video frame image and displaying the processed video frame image and the target AR special effect through the AR device, is configured to: perform processing corresponding to the environmental information on at least part of the images in the video stream, and display the processed at least part of the images and the target AR special effect through the AR device.
In an alternative embodiment, the environmental information includes at least one of: time information, brightness information, scene information, and weather information.
In an optional implementation manner, the first determining module is specifically configured to: for the case that the environmental information includes time information, read the system time, and determine the time information of the target scene based on the system time; and/or, for the case that the environmental information includes brightness information, determine the brightness information of the target scene based on the pixel values of the pixel points in the video frame image; and/or, for the case that the environmental information includes scene information, perform scene detection processing on the video frame image by using a pre-trained scene detection model to obtain the scene information of the target scene; and/or, for the case that the environmental information includes weather information, perform weather detection processing on the video frame image by using a pre-trained weather detection model to obtain the weather information of the target scene, or obtain the geographic position information of the AR device and acquire the weather information of the target scene based on the geographic position information.
In an alternative embodiment, the first determining module includes: a first determining unit, configured to determine first pose information of the AR device in the target scene based on the video frame image; and a second determining unit configured to determine the target AR effect from the at least one AR effect based on the first pose information and second pose information of the at least one AR effect in the target scene.
In an alternative embodiment, the first determining unit is specifically configured to: performing key point identification on the video frame image to obtain a first key point in the video frame image; and determining a target second key point matched with the first key point from second key points of a three-dimensional model of the target scene based on the first key point, and determining first pose information of the AR equipment under the scene coordinate system based on three-dimensional coordinate values of the target second key point under the scene coordinate system.
In an alternative embodiment, the first determining unit is further configured to: and determining first pose information of the AR equipment in the scene coordinate system based on the two-dimensional coordinate value of the first key point in the image coordinate system and the three-dimensional coordinate value of the second key point of the target in the scene coordinate system.
In an alternative embodiment, the method further comprises: the second determining module is used for determining target environment information corresponding to the target AR special effect; the processing module is specifically configured to: and processing the video frame image based on the environment information and the target environment information.
In an alternative embodiment, the processing module is further configured to: performing at least one of the following on the video frame image: brightness adjustment processing, saturation adjustment processing, hue adjustment processing, and sharpness adjustment processing.
In a third aspect, an optional implementation of the disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, the processor is configured to execute the machine-readable instructions stored in the memory, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, an alternative implementation of the present disclosure further provides a computer readable storage medium having stored thereon a computer program which when executed performs the steps of the first aspect, or any of the possible implementation manners of the first aspect.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below; these drawings are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure, and together with the description serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 shows a flowchart of a method for displaying AR special effects according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an AR special effect display device according to an embodiment of the present disclosure;
fig. 3 is a specific schematic diagram of a first determining module in the AR special effect display device provided in the embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another AR effect display device provided by embodiments of the present disclosure;
fig. 5 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
Research shows that, owing to limitations of the AR device itself, the image shot by the AR device may suffer from problems such as low definition and insufficient brightness, which leads to a poor display effect when an AR special effect is displayed on the AR device. In addition, certain AR special effects require a specific environment when displayed, and the images acquired through the AR device often cannot satisfy the display environment of the AR special effect, which likewise causes a poor display effect.
Based on the above research, the embodiments of the disclosure provide a display method for AR special effects: by processing the image, the environment presented by the processed image becomes more suitable for displaying the AR special effect, and the display effect of the AR special effect is improved.
The defects in the above scheme are findings obtained by the inventors through practice and careful study; therefore, the discovery of the above problems and the solutions proposed hereinafter should all be regarded as contributions of the inventors to the present disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
For ease of understanding the present embodiment, a detailed description is first given of the display method of the AR special effect disclosed in the embodiments of the present disclosure. The execution subject of the display method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability, for example: an AR device, a server, or another processing device. The AR device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. In some possible implementations, the display method of the AR special effect may be implemented by a processor invoking computer-readable instructions stored in a memory.
The following describes a method for displaying AR special effects provided in the embodiments of the present disclosure.
Referring to fig. 1, a flowchart of a method for displaying AR special effects according to an embodiment of the present disclosure is shown, where the method includes steps S101 to S103, where:
S101: acquiring a video frame image obtained by an AR device capturing a target scene;
S102: determining a target AR special effect based on the video frame image, and determining environmental information of the target scene based on the video frame image;
S103: and processing the video frame image corresponding to the environment information based on the environment information, and displaying the processed video frame image and the target AR special effect through the AR equipment.
According to the embodiments of the disclosure, a video frame image obtained by an AR device capturing the target scene is acquired; a target AR special effect is determined based on the video frame image, and environmental information of the target scene is determined based on the video frame image; processing corresponding to the environmental information is then performed on the video frame image, and the processed video frame image and the target AR special effect are displayed through the AR device. Because the video frame image is processed, it becomes more suitable for displaying the AR special effect, and the display effect of the AR special effect is improved.
The following describes the above-mentioned steps S101 to S103 in detail.
1. In S101, the video frame image is, for example, at least one frame image taken from a video stream acquired from the AR device.
Wherein, the video stream can be one or more, for example, a plurality of video streams corresponding to different angles can be generated aiming at different shooting angles; or multiple video streams corresponding to different fields of view may be generated for different fields of view.
The video frame images may be multiple video frame images taken from multiple video streams, or video frame images taken at different times or according to target requirements. For example, for a target video stream, one frame of image may be captured every 1 second or every 1 minute as a video frame image. Alternatively, a frame may be captured as a video frame image upon detecting a change in the captured video, for example: detecting that the shooting environment has changed, such as a large change in brightness or blur; or detecting a change in the shooting subject, for example, an obvious change in the gender, height, clothing, etc. of the subject in the video stream.
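As an illustration of the interval-based sampling described above, the following Python sketch (the use of OpenCV and the one-second interval are assumptions of this description, not requirements of the disclosure) extracts one frame per fixed interval from a video stream:

```python
import cv2

def sample_frames(video_path: str, interval_s: float = 1.0):
    """Yield one frame from the stream every `interval_s` seconds.

    A minimal sketch of fixed-interval sampling; change-triggered sampling
    would replace the modulo test below with a frame-difference or
    detector-based check.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreported
    step = max(1, int(round(fps * interval_s)))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield frame                       # becomes a "video frame image"
        index += 1
    cap.release()
```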
In the embodiment of the present disclosure, when the AR device acquires the video stream, the AR device may use a camera set on the AR device to perform acquisition, or may use another set camera to perform acquisition. When the video stream is acquired by using the camera which is additionally arranged, the camera is in communication connection with the AR equipment, and the acquired video stream can be sent to the AR equipment.
In the case where the display method of the AR special effect provided by the embodiments of the disclosure is executed on the AR device, the AR device acquires the video stream and then samples it to obtain video frame images. In the case where the method is executed by a server, the AR device acquires the video stream, samples it to obtain video frame images, and then sends the sampled video frame images to the server; the server receives the video frame images sent by the AR device and performs the subsequent processing.
2. In S102 described above, the environmental information of the target scene determined based on the video frame image includes, for example, at least one of: time information, brightness information, scene information, and weather information. Exemplary:
(1) For the case that the environmental information includes time information, the determining environmental information of the target scene based on the video frame image includes: and reading system time, and determining time information of the target scene based on the system time.
Specifically, if time information is involved in the display process, the system time can be read and used as the time information of the target scene. After the system time is read and the time information of the target scene is determined, consider the case where the environment required for displaying the target AR special effect does not match the time information of the target scene. For example, when the target AR special effect is fireworks, the display environment is generally night or dim light; if the time information of the target scene indicates noon, the brightness of a video frame image obtained under normal lighting conditions is usually high and unsuitable for displaying fireworks, so the brightness of the video frame image is correspondingly reduced, making the fireworks stand out more clearly in the graphical user interface of the AR device.
Here, corresponding brightness information may also be determined based on the acquired time information. For example, if the current time is 10:00 am, the corresponding brightness may be assumed to be a higher value; if the current time is 10:00 pm, the corresponding brightness may be assumed to be a lower value.
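A minimal sketch of such a time-to-brightness mapping follows; the hour boundaries and brightness values are illustrative assumptions rather than values given by the disclosure:

```python
def assumed_brightness_for_hour(hour: int) -> float:
    """Map a local hour of day to an assumed scene brightness in [0, 1].

    Purely illustrative thresholds; a real system could interpolate, or
    combine this estimate with the brightness measured from the image.
    """
    if 7 <= hour < 18:     # daytime
        return 0.8
    if 18 <= hour < 21:    # dusk
        return 0.4
    return 0.1             # night
```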
(2) For the case that the environmental information includes luminance information, the determining environmental information of the target scene based on the video frame image includes: and determining the brightness information of the target scene based on the pixel values of all pixel points in the video frame image.
Specifically, the brightness of each pixel point on the video frame image can be obtained; determining the average value of the brightness of each pixel point as the brightness information of the target scene; or dividing the video frame image into a plurality of image areas, and acquiring brightness information of each image area; and determining the weighted average value of the brightness information of each image area as the brightness information of the target scene. Among the two methods for acquiring the brightness information of the video frame image, the former method focuses on the overall brightness of the image, and correspondingly, the latter method focuses on the brightness of the key area of the image, and can be selected according to practical conditions.
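The two estimation variants described above can be sketched as follows (a Python/OpenCV illustration; the 3x3 grid and the centre weighting are assumptions of this description):

```python
import cv2
import numpy as np

def global_brightness(frame_bgr: np.ndarray) -> float:
    """Mean luma over all pixels: the 'overall brightness' variant."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return float(gray.mean())

def regional_brightness(frame_bgr: np.ndarray, grid=(3, 3)) -> float:
    """Weighted mean over image regions: the 'key region' variant."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    rows, cols = grid
    means = [
        gray[r * h // rows:(r + 1) * h // rows,
             c * w // cols:(c + 1) * w // cols].mean()
        for r in range(rows) for c in range(cols)
    ]
    weights = np.ones(rows * cols)
    weights[(rows // 2) * cols + cols // 2] = 2.0  # emphasise the centre region
    return float(np.dot(means, weights) / weights.sum())
```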
(3) For the case that the environmental information includes scene information, the determining environmental information of the target scene based on the video frame image includes: and performing scene detection processing on the video frame image by utilizing a pre-trained scene detection model to obtain scene information of the target scene.
The scene detection model is obtained by training samples of a plurality of different scenes. The training samples may include training samples for different scenes corresponding to different brightness for various periods of time and various weather conditions.
After the training of the scene detection model is completed, the acquired video frame image may be input into the scene detection model to perform scene detection processing, so that the scene information of the target scene corresponding to the video frame image may be obtained, and the AR special effect of the corresponding scene may be displayed.
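For ease of understanding, a minimal inference sketch in the style of PyTorch follows; the label set, input size, and the classification-style architecture are assumptions of this description, since the disclosure does not specify the model:

```python
import torch
import torchvision.transforms as T

# Illustrative labels; the actual scene taxonomy is application-defined.
SCENE_LABELS = ["indoor", "street", "scenic_spot", "museum"]

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

def detect_scene(model: torch.nn.Module, frame_rgb) -> str:
    """Run a pre-trained scene detection model on one video frame image."""
    model.eval()
    with torch.no_grad():
        x = preprocess(frame_rgb).unsqueeze(0)   # [1, 3, 224, 224]
        logits = model(x)
        return SCENE_LABELS[int(logits.argmax(dim=1))]
```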
Here, the specific content of the scene information may be set according to actual needs, which is not described herein.
(4) For the case that the environmental information includes weather information, the determining environmental information of the target scene based on the video frame image includes: performing weather detection processing on the video frame image by using a pre-trained weather detection model to obtain weather information of the target scene; or obtaining the geographic position information of the AR equipment; and acquiring weather information of the target scene based on the geographic position information.
In one possible embodiment, the weather detection model is trained using a plurality of samples of different weather. The training samples may include training samples of different weather for each time period.
For example, after the weather detection model training is completed, the acquired video frame image may be input into the weather detection model to perform weather detection processing, so as to obtain weather information of the target scene corresponding to the video frame image.
In another possible embodiment, the corresponding weather information may be obtained based on the geographic location where the device is located, such as latitude and longitude or a natural geographic area, and the current point in time.
For example, a small processor may be disposed in the AR device, and the processor may send and receive signals; for example, the AR device sends an instruction for acquiring weather information together with the geographic information of its location, and then receives the weather information returned by a cloud server or terminal server. After the weather information is determined, if the environmental information cannot match the AR special effect to be displayed, the video frame image can be processed. For example, if the AR special effect to be displayed is raindrops but the detected weather is clear, fusing the AR special effect directly with the video frame image would look unrealistic; in this case, the brightness of the video frame image may be reduced, a certain degree of blurring may be applied to the video frame image, and so on, so that the display effect is more realistic when the processed video frame image is fused with the AR special effect.
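A sketch of the geolocation-based query described above; the endpoint URL and the response fields are hypothetical, standing in for whatever weather service an implementation actually uses:

```python
import requests

WEATHER_ENDPOINT = "https://weather.example.com/v1/current"  # hypothetical service

def fetch_weather(latitude: float, longitude: float) -> dict:
    """Query a weather service with the AR device's geographic position."""
    resp = requests.get(
        WEATHER_ENDPOINT,
        params={"lat": latitude, "lon": longitude},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()   # e.g. {"condition": "rain", "cloud_cover": 0.9}
```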
The target AR effect refers to an AR effect to be displayed in the AR device; generally, AR special effects may be set for different positions of a target scene, respectively; when the video frame image acquired by the AR equipment comprises the position corresponding to the AR special effect, the AR special effect at the corresponding position is taken as the target AR special effect.
In the embodiments of the present disclosure, the target AR special effects may be determined, for example, in the following manner:
and determining first pose information of the AR equipment in the target scene based on the video frame image. The target AR effect is determined from the at least one AR effect based on the first pose information and second pose information of the at least one AR effect in the target scene.
In a specific implementation, the first pose information includes a first three-dimensional coordinate value of an optical center of an image acquisition device disposed on the AR device in the scene coordinate system, and optical axis orientation information of the image acquisition device; the optical axis orientation information may include, for example: the deflection angle and the pitch angle of an optical axis of the image acquisition device in a scene coordinate system established based on a target scene; or the optical axis orientation information is, for example, a vector in the scene coordinate system.
In a possible implementation manner, when determining first pose information of the AR device under a scene coordinate system corresponding to the target scene based on a video frame image acquired by the AR device, key point identification may be performed on the video frame image to obtain a first key point in the video frame image; and determining a target second key point matched with the first key point from second key points in a three-dimensional model established based on a target scene based on the first key point, and determining first pose information of the AR equipment under the scene coordinate system based on three-dimensional coordinate values of the target second key point under the scene coordinate system.
In a specific implementation, the first key point includes, for example, at least one of: a key point representing contour information of an object, a key point representing color-block information of the object surface, and a key point representing texture changes of the object surface.
After the first key point in the video frame image is obtained, the first key point is matched with a second key point in a three-dimensional model of a pre-constructed target scene, and a target second key point which can be matched with the first key point is determined from the second key point. At this time, the object represented by the second key point of the target is the same object as the object represented by the first key point. And the three-dimensional coordinate value of the second target key point in the three-dimensional model is the three-dimensional coordinate value of the first key point in the three-dimensional model.
Here, the three-dimensional model of the target scene may be obtained by, for example, either of the following methods: simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM) modeling, and structure from motion (Structure From Motion, SFM) modeling.
For example, when a three-dimensional model of a target scene is constructed, a three-dimensional coordinate system is established by taking a preset coordinate point as an origin; the preset coordinate point may be a building coordinate point in the target scene or a coordinate point where the camera device is located when the camera collects the target scene.
The camera acquires video images, and a three-dimensional model of a target scene is built by tracking a sufficient number of key points in a video frame of the camera; the key points in the three-dimensional model of the constructed target scene also comprise the key point information of the object, namely the second key point.
The first key points are matched against a sufficient number of second key points in the three-dimensional model of the target scene, the target second key points are determined, and the three-dimensional coordinate values (x1, y1, z1) of the target second key points in the three-dimensional model of the target scene are read. Then, based on the three-dimensional coordinate values of the target second key points, the first pose information of the AR device in the scene coordinate system is determined.
Specifically, when determining the first pose information of the AR device in the scene coordinate system based on the three-dimensional coordinate value of the target second key point, for example, using a camera imaging principle, the first pose information of the AR device in the three-dimensional model is recovered according to the three-dimensional coordinate value of the target second key point in the three-dimensional model.
Here, when the first pose information of the AR device in the three-dimensional model is restored by using the camera imaging principle, for example, a target pixel point corresponding to the first key point in the video frame image may be determined; and determining first pose information of the AR equipment in the scene coordinate system based on the two-dimensional coordinate value of the target pixel point in the image coordinate system and the three-dimensional coordinate value of the target second key point in the scene coordinate system.
Specifically, a camera coordinate system may be constructed based on the AR device: the origin of the camera coordinate system is the optical center of the image acquisition device in the AR device; the z-axis is the straight line on which the optical axis of the image acquisition device lies; and the plane perpendicular to the optical axis and passing through the optical center is the plane of the x-axis and y-axis. A depth detection algorithm can be used to determine the depth value corresponding to each pixel point in the video frame image. After the target pixel point is determined in the video frame image, its depth value h in the camera coordinate system can be obtained, that is, the three-dimensional coordinate value, in the camera coordinate system, of the first key point corresponding to the target pixel point. Then, using the three-dimensional coordinate value of the first key point in the camera coordinate system and its three-dimensional coordinate value in the scene coordinate system, the coordinate value of the origin of the camera coordinate system in the scene coordinate system is recovered, which is the position part of the first pose information of the AR device in the scene coordinate system; and the angles of the z-axis of the camera coordinate system relative to the coordinate axes of the scene coordinate system are determined, which gives the attitude part of the first pose information of the AR device in the scene coordinate system.
For example, the three-dimensional coordinate value of the target pixel point in the camera coordinate system is expressed as (x2, y2, h).
Then, based on the obtained three-dimensional coordinate value (x1, y1, z1) of the target second key point in the scene coordinate system and the determined three-dimensional coordinate value (x2, y2, h) of the target pixel point in the camera coordinate system, the first pose information of the AR device in the scene coordinate system is determined according to the mapping relation (x1, y1, z1) → (x2, y2, h).
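The mapping between the scene-coordinate values (x1, y1, z1) and the image observations is the classical Perspective-n-Point setting. The following OpenCV sketch is one way to realize it; using solvePnP, and assuming the camera intrinsics are known, are choices of this description rather than requirements of the disclosure:

```python
import cv2
import numpy as np

def recover_first_pose(points_2d, points_3d, camera_matrix, dist_coeffs=None):
    """Recover the AR device pose from matched key points.

    points_2d: Nx2 pixel coordinates of the first key points in the frame.
    points_3d: Nx3 scene coordinates of the matched target second key points.
    At least four to six reliable correspondences are needed in practice.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs,
    )
    if not ok:
        raise RuntimeError("PnP failed: correspondences too few or unreliable")
    R, _ = cv2.Rodrigues(rvec)                       # world-to-camera rotation
    camera_center = (-R.T @ tvec).ravel()            # optical centre in scene coords
    optical_axis = R.T @ np.array([0.0, 0.0, 1.0])   # camera z-axis in scene coords
    return camera_center, optical_axis
```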
In another possible implementation manner, when determining the first pose information of the AR device under the scene coordinate system corresponding to the target scene based on the video frame image acquired by the AR device, the first pose information of the AR device may be determined by first performing scene key point recognition on the video frame image, determining a target pixel point corresponding to at least one scene key point in the video frame image, performing depth value prediction on the video frame image, determining depth values corresponding to each pixel point in the video frame image, and then determining the first pose information of the AR device based on the depth values corresponding to the target pixel point.
After determining the first pose information of the AR device in the scene coordinate system, for example, the target AR effect may be determined based on the first pose information in the scene coordinate system and the second pose information of the AR effect in the target scene in the following manner.
Illustratively, the target scene will include a plurality of objects therein; for at least part of objects in the target scene, an AR special effect corresponding to the objects in the target scene can be preset; the object includes, for example, at least one of: scenic spots, museum exhibits, functional buildings, etc.
In a specific implementation, the target object refers to at least part of the objects contained in the video frame image acquired by the AR device. The target object may be obtained, for example, in the following manner: performing key point identification on the video frame image to obtain first key points; determining, based on the first key points, target second key points matched with the first key points from the second key points of the three-dimensional model of the target scene; and determining the object to which the target second key points belong as the target object.
Then, based on the target object and the association relationship between the target object and the AR special effects, the AR special effects associated with the target object are determined.
In one possible implementation, all AR effects associated with a target object may be taken as target AR effects.
In another possible implementation, the target AR effect may be determined, based on certain screening criteria, from all AR effects associated with the target object.
For example, when an AR effect is set for an object, second pose information within the target scene may be determined for the AR effect.
The second pose information of the AR effect in the target scene is, for example, its pose information in the scene coordinate system, and includes, for example: the three-dimensional coordinate value of the AR effect in the scene coordinate system and the attitude of the AR effect in the scene coordinate system.
And after the target object is determined, determining the target AR special effect based on the first pose information and the second pose information of the AR special effect corresponding to the target object under the scene coordinate system.
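One plausible way to compare the first pose information with each candidate effect's second pose information is a distance limit plus a field-of-view test; the sketch below is an assumption of this description, since the disclosure does not fix the exact comparison:

```python
import numpy as np

def select_target_effects(device_pos, device_forward, effects,
                          max_dist=50.0, fov_deg=60.0):
    """Pick AR special effects whose anchor falls inside the device's view.

    `effects` is a list of (effect_id, position) pairs in scene coordinates;
    the distance limit and field of view are illustrative assumptions.
    """
    device_pos = np.asarray(device_pos, dtype=float)
    forward = np.asarray(device_forward, dtype=float)
    forward /= np.linalg.norm(forward)
    cos_half_fov = np.cos(np.radians(fov_deg / 2))
    visible = []
    for effect_id, pos in effects:
        to_effect = np.asarray(pos, dtype=float) - device_pos
        dist = np.linalg.norm(to_effect)
        if dist == 0 or dist > max_dist:
            continue
        if np.dot(to_effect / dist, forward) >= cos_half_fov:
            visible.append(effect_id)
    return visible
```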
In another embodiment of the present disclosure, after determining the target AR effect, third pose information of the target AR effect with respect to the AR device may also be determined.
The third pose information is used for determining the display position and the display pose of the AR equipment for displaying the target AR special effect. Therefore, the AR equipment can display the target AR special effect based on the third pose information, and the AR special effect can have a richer display form.
Regarding the display step in S103: in the case where the display method of the AR special effect provided by the embodiments of the disclosure is executed at the server side, the server sends the target AR special effect to the AR device, so that the AR device displays the target object and the corresponding target AR special effect in association; in the case where the method is executed on the AR device, the AR device directly displays the target AR special effect and the target object in association after determining the target AR special effect.
Taking display of the AR special effect on the AR device as an example, the AR device can identify, from the acquired video frame image, the position of the target object in the video frame image, and based on that position, display the corresponding target AR special effect in association with the target object.
In another embodiment of the present disclosure, determining the target environment information corresponding to the target AR special effect is further included.
For example, when adding an AR special effect to a target scene, target environment information may be preconfigured for the AR special effect. If the AR special effect is fireworks, the target environment information set for it is, for example: brightness lower than a preset brightness threshold. If the AR special effect is snowflakes, the target environment information set for it is, for example: brightness lower than a preset brightness threshold and blur degree greater than a preset blur threshold.
The target environment information added for the AR effect may be added to each AR effect in the form of a tag, for example, and the tag and the AR effect are stored in association. When any AR effect is determined to be the target AR effect, the tag stored in association with the target AR effect can be read to obtain target environment information of the target AR effect.
After the target environment information is obtained, the environmental information determined from the video frame image is matched against the target environment information; if the two are inconsistent, the video frame image can be adjusted so that its environmental information becomes consistent with, or close to, the target environment information, making the adjusted video frame image more suitable for displaying the target AR special effect and improving the display effect.
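A minimal sketch of the tag lookup and matching described above; the tag keys and threshold values are illustrative assumptions, since the disclosure only states that tags are stored in association with each AR special effect:

```python
import numpy as np

# Illustrative tag store mapping each effect to its target environment information.
EFFECT_ENV_TAGS = {
    "fireworks": {"max_brightness": 60.0},                  # needs a dim frame
    "snowflake": {"max_brightness": 60.0, "min_blur": 0.3},
}

def adjust_for_effect(frame: np.ndarray, effect_id: str,
                      measured_brightness: float) -> np.ndarray:
    """Nudge a frame toward an effect's target environment information.

    Only brightness is matched in this sketch; saturation, hue, and
    sharpness would be matched the same way.
    """
    tags = EFFECT_ENV_TAGS.get(effect_id, {})
    max_b = tags.get("max_brightness")
    if max_b is not None and measured_brightness > max_b:
        scale = max_b / measured_brightness        # darken toward the target level
        frame = np.clip(frame.astype(np.float32) * scale, 0, 255).astype(np.uint8)
    return frame
```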
3. In S103, when performing processing corresponding to the environmental information, only the video frame image itself may be processed, so as to obtain an image that fuses better with the AR special effect; alternatively, at least part of the images in the video stream may be processed correspondingly to the environmental information, and the processed at least part of the images and the target AR special effect displayed through the AR device.
Here, at least part of the image to be processed may be determined, for example, in the following manner:
The partial images of the video stream that are later in time than the current video frame image and earlier in time than the next video frame image are treated as the at least partial images to be processed.
Processing at least part of the images in the video stream corresponding to the environment information comprises the following steps: performing at least one of the following on the at least partial images in the video stream: brightness adjustment processing, saturation adjustment processing, hue adjustment processing, and sharpness adjustment processing.
For example, taking the brightness adjustment process: if the determined target AR special effect is fireworks, lightning, snowflakes, or the like, the required environmental information is usually low brightness. If the brightness of the video frame image is high, the brightness of the image can be reduced in order to make the AR special effect more distinct.
Taking the hue adjustment process as an example: if the determined target AR special effect is a warm-toned effect such as flame or sunset while the corresponding image is cool-toned, then in order to fuse the AR special effect better with its environment, hue processing needs to be performed on the image, shifting the hue of the images in the video stream toward warm colors so that the AR special effect can be displayed realistically.
The saturation adjustment process and the sharpness adjustment process are both for improving the final display effect, and are not described herein.
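Each of the four adjustment processes can be expressed as a single enhancement step. The following Pillow-based sketch is illustrative; the factor values in the usage comment are assumptions:

```python
from PIL import Image, ImageEnhance

def apply_adjustments(image: Image.Image, brightness: float = 1.0,
                      saturation: float = 1.0, hue_shift: int = 0,
                      sharpness: float = 1.0) -> Image.Image:
    """Apply brightness, saturation, hue, and sharpness adjustments to a frame.

    Factors of 1.0 (or a hue shift of 0) leave the image unchanged; hue is
    shifted by rotating the H channel in HSV space.
    """
    image = ImageEnhance.Brightness(image).enhance(brightness)
    image = ImageEnhance.Color(image).enhance(saturation)    # saturation
    image = ImageEnhance.Sharpness(image).enhance(sharpness)
    if hue_shift:
        h, s, v = image.convert("HSV").split()
        h = h.point(lambda px: (px + hue_shift) % 256)
        image = Image.merge("HSV", (h, s, v)).convert("RGB")
    return image

# e.g. darken and warm a frame before overlaying a fireworks effect:
# processed = apply_adjustments(frame, brightness=0.6, hue_shift=-10)
```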
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiment of the present disclosure further provides an AR special effect display device corresponding to the AR special effect display method, and since the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the AR special effect display method in the embodiment of the present disclosure, implementation of the device may refer to implementation of the method, and repeated parts will not be repeated.
Referring to fig. 2, fig. 3, and fig. 4: fig. 2 is a schematic diagram of an AR special effect display device provided by an embodiment of the present disclosure, fig. 3 is a specific schematic diagram of the first determining module in the AR special effect display device provided in the embodiment of the present disclosure, and fig. 4 is a schematic diagram of another AR special effect display device provided by an embodiment of the present disclosure. The display device includes: an acquisition module 210, a first determination module 220, and a processing module 230;
the acquiring module 210 is configured to acquire a video frame image obtained by acquiring a target scene by using an AR device;
a first determining module 220, configured to determine a target AR special effect based on the video frame image, and determine environmental information of the target scene based on the video frame image;
And the processing module 230 is configured to perform processing corresponding to the environmental information on the video frame image based on the environmental information, and display the processed video frame image and the target AR special effect through the AR device.
The embodiment of the disclosure aims to improve the display effect of the image fused with the AR special effect, and to improve the efficiency of image shooting, by optimizing and improving the image based on the original effect of the video frame image.
In an alternative embodiment, the video frame image is at least one frame image in a video stream; the processing module 230 is configured to, when performing processing corresponding to the environmental information on the video frame image and displaying the processed video frame image and the target AR special effect through the AR device: and processing at least part of images in the video stream corresponding to the environment information, and displaying the processed at least part of images and the target AR special effect through the AR equipment.
In an alternative embodiment, the environmental information includes at least one of: time information, brightness information, scene information, and weather information.
In an alternative embodiment, for the case that the environmental information includes time information, the first determining module 220 is specifically configured to:
reading system time, and determining time information of the target scene based on the system time;
And/or the number of the groups of groups,
For the case that the environment information includes luminance information, the first determining module is specifically configured to:
determining brightness information of the target scene based on pixel values of all pixel points in the video frame image;
And/or the number of the groups of groups,
For the case that the environment information includes scene information, the first determining module 220 is specifically configured to:
performing scene detection processing on the video frame image by utilizing a pre-trained scene detection model to obtain scene information of the target scene;
And/or the number of the groups of groups,
For the case that the environmental information includes weather information, the first determining module is specifically configured to:
Performing weather detection processing on the video frame image by using a pre-trained weather detection model to obtain weather information of the target scene; or obtaining the geographic position information of the AR equipment; and acquiring weather information of the target scene based on the geographic position information.
In an alternative embodiment, as shown in fig. 3, the first determining module 220 includes:
A first determining unit 221, configured to determine, based on the video frame image, first pose information of the AR device in the target scene;
The second determining unit 222 is configured to determine the target AR effect from the at least one AR effect based on the first pose information and second pose information of the at least one AR effect in the target scene.
In an alternative embodiment, the first determining unit 221 is specifically configured to:
performing key point identification on the video frame image to obtain a first key point in the video frame image;
and determining a target second key point matched with the first key point from second key points of a three-dimensional model of the target scene based on the first key point, and determining first pose information of the AR equipment under the scene coordinate system based on three-dimensional coordinate values of the target second key point under the scene coordinate system.
In an alternative embodiment, the first determining unit 221 is further configured to:
And determining first pose information of the AR equipment in the scene coordinate system based on the two-dimensional coordinate value of the first key point in the image coordinate system and the three-dimensional coordinate value of the second key point of the target in the scene coordinate system.
In an alternative embodiment, as shown in fig. 4, the display device further includes:
A second determining module 240, configured to determine target environment information corresponding to the target AR special effect;
the processing module 230 is specifically configured to:
and processing the video frame image based on the environment information and the target environment information.
In an alternative embodiment, the processing module 230 is further configured to:
performing at least one of the following on the video frame image:
Brightness adjustment processing, saturation adjustment processing, hue adjustment processing, and sharpness adjustment processing.
The processing flow of each module in the display device and the interaction flow between each module may be described with reference to the related description in the above method embodiment, which is not described in detail herein.
The embodiment of the disclosure further provides a computer device, as shown in fig. 5, which is a schematic structural diagram of the computer device provided by the embodiment of the disclosure, including:
a processor 11 and a memory 12; the memory 12 stores machine readable instructions executable by the processor 11 which, when the computer device is running, are executed by the processor to perform the steps of:
Acquiring a video frame image obtained by the AR device capturing a target scene;
Determining a target AR special effect based on the video frame image, and determining environmental information of the target scene based on the video frame image;
and processing the video frame image corresponding to the environment information based on the environment information, and displaying the processed video frame image and the target AR special effect through the AR equipment.
In an alternative embodiment, in the instructions executed by the processor 11, the video frame image is at least one frame image in a video stream; processing the video frame image corresponding to the environmental information, and displaying the processed video frame image and the target AR special effect through the AR equipment, wherein the processing comprises the following steps: and processing at least part of images in the video stream corresponding to the environment information, and displaying the processed at least part of images and the target AR special effect through the AR equipment.
In an alternative embodiment, in the instructions executed by the processor 11, the environment information includes at least one of the following: time information, brightness information, scene information, and weather information.
In an alternative embodiment, in the instructions executed by the processor 11, for the case where the environment information includes time information, the determining environment information of the target scene based on the video frame image includes:
reading a system time, and determining the time information of the target scene based on the system time;
and/or,
for the case that the environment information includes brightness information, the determining environment information of the target scene based on the video frame image includes:
determining the brightness information of the target scene based on the pixel values of the pixels in the video frame image (a brightness-estimation sketch follows this list);
and/or,
for the case that the environment information includes scene information, the determining environment information of the target scene based on the video frame image includes:
performing scene detection processing on the video frame image using a pre-trained scene detection model to obtain the scene information of the target scene;
and/or,
for the case that the environment information includes weather information, the determining environment information of the target scene based on the video frame image includes:
performing weather detection processing on the video frame image using a pre-trained weather detection model to obtain the weather information of the target scene; or obtaining geographic position information of the AR device, and acquiring the weather information of the target scene based on the geographic position information.
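By way of illustration of the brightness branch above, scene brightness can be estimated from the pixel values of all pixels as a mean luma; the BT.601 weighting below is an assumption of the sketch, since the disclosure does not fix a particular statistic:

    import numpy as np

    def estimate_brightness(frame_bgr: np.ndarray) -> float:
        """Estimate scene brightness from the pixel values of all pixels,
        as the mean luma of the frame (0 = black, 255 = white)."""
        # ITU-R BT.601 luma weights applied to the B, G, R channels
        b, g, r = frame_bgr[..., 0], frame_bgr[..., 1], frame_bgr[..., 2]
        luma = 0.114 * b + 0.587 * g + 0.299 * r
        return float(luma.mean())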
In an alternative embodiment, in the instructions executed by the processor 11, the determining a target AR special effect based on the video frame image includes:
determining first pose information of the AR device in the target scene based on the video frame image; and
determining the target AR special effect from at least one AR special effect based on the first pose information and second pose information of the at least one AR special effect in the target scene.
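The disclosure leaves the selection criterion open, so the following sketch rests on an assumption: the target AR special effect is taken to be the effect whose second pose (anchor position in the scene coordinate system) lies nearest the device's first pose within a visibility radius. Both the criterion and the max_distance default are illustrative:

    import numpy as np

    def select_target_effect(device_position, effects, max_distance=10.0):
        """Pick the AR effect whose second pose (3D anchor position in the
        scene coordinate system) is closest to the device's first pose,
        within a visibility radius. `effects` maps effect -> anchor position."""
        best, best_dist = None, max_distance
        for effect, anchor in effects.items():
            dist = float(np.linalg.norm(np.asarray(anchor) -
                                        np.asarray(device_position)))
            if dist < best_dist:
                best, best_dist = effect, dist
        return best  # None if no effect is within range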
In an alternative embodiment, in the instructions executed by the processor 11, the determining, based on the video frame image, first pose information of the AR device in the target scene includes:
performing key point identification on the video frame image to obtain a first key point in the video frame image;
and determining, from second key points of a three-dimensional model of the target scene, a target second key point matched with the first key point, and determining first pose information of the AR device in the scene coordinate system based on three-dimensional coordinate values of the target second key point in the scene coordinate system.
In an alternative embodiment, in the instructions executed by the processor 11, the determining first pose information of the AR device in the scene coordinate system based on the three-dimensional coordinate values of the target second key point in the scene coordinate system includes:
determining first pose information of the AR device in the scene coordinate system based on the two-dimensional coordinate values of the first key point in the image coordinate system and the three-dimensional coordinate values of the target second key point in the scene coordinate system.
In an alternative embodiment, the instructions executed by the processor 11 further include: determining target environment information corresponding to the target AR special effect; and the performing, based on the environment information, processing corresponding to the environment information on the video frame image includes: processing the video frame image based on the environment information and the target environment information.
In an alternative embodiment, the performing processing corresponding to the environment information on the video frame image includes performing at least one of the following on the video frame image: brightness adjustment processing, saturation adjustment processing, hue adjustment processing, and sharpness adjustment processing.
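Combining the pieces, the matching-and-adjustment logic could look like the following sketch, which reuses the adjust_frame helper sketched after the adjustment list above; representing the environment information as a brightness scalar, the numeric tolerance, and the proportional correction are all assumptions of the sketch:

    def process_frame(frame, environment, target_environment, tolerance=0.1):
        """Compare the measured environment information against the target
        environment information of the AR effect; if they disagree, adjust
        the frame so its environment information approaches the target."""
        measured = environment["brightness"]
        wanted = target_environment["brightness"]
        if abs(measured - wanted) <= tolerance * wanted:
            return frame  # already consistent: no processing needed
        # proportional brightness correction toward the target value
        factor = wanted / max(measured, 1e-6)
        return adjust_frame(frame, brightness=factor)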
The embodiments of the present disclosure further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the AR special effect display method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the AR special effect display method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where the program code includes instructions for executing the steps of the AR special effect display method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The embodiments of the present disclosure further provide a computer program which, when executed by a processor, implements any of the methods of the foregoing embodiments. The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some communication interfaces, and the indirect coupling or communication connection between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed by the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (9)

1. A method for displaying AR special effects, the method comprising:
acquiring a video frame image obtained by an AR device capturing a target scene;
Determining a target AR special effect based on the video frame image, and determining environmental information of the target scene based on the video frame image; the environmental information includes at least one of: time information, brightness information, scene information, and weather information;
performing, based on the environment information, processing corresponding to the environment information on the video frame image, and displaying the processed video frame image and the target AR special effect through the AR device;
wherein the method further comprises: determining target environment information corresponding to the target AR special effect;
the performing, based on the environment information, processing corresponding to the environment information on the video frame image comprises:
matching the environment information determined from the video frame image against the target environment information; and in a case where the two are inconsistent, adjusting the video frame image so that the environment information of the adjusted video frame image is consistent with or close to the target environment information;
and the performing processing corresponding to the environment information on the video frame image comprises:
performing at least one of the following on the video frame image:
Brightness adjustment processing, saturation adjustment processing, hue adjustment processing, and sharpness adjustment processing.
2. The display method according to claim 1, wherein the video frame image is at least one frame image in a video stream;
the performing processing corresponding to the environment information on the video frame image and displaying the processed video frame image and the target AR special effect through the AR device comprises:
performing processing corresponding to the environment information on at least part of the images in the video stream, and displaying the processed at least part of the images and the target AR special effect through the AR device.
3. The display method according to claim 1, wherein for a case where the environment information includes time information, the determining the environment information of the target scene based on the video frame image includes:
reading a system time, and determining the time information of the target scene based on the system time; and/or,
for the case that the environment information includes brightness information, the determining environment information of the target scene based on the video frame image includes:
determining the brightness information of the target scene based on the pixel values of the pixels in the video frame image;
and/or,
for the case that the environment information includes scene information, the determining environment information of the target scene based on the video frame image includes:
performing scene detection processing on the video frame image using a pre-trained scene detection model to obtain the scene information of the target scene;
and/or,
for the case that the environment information includes weather information, the determining environment information of the target scene based on the video frame image includes:
performing weather detection processing on the video frame image using a pre-trained weather detection model to obtain the weather information of the target scene; or obtaining geographic position information of the AR device, and acquiring the weather information of the target scene based on the geographic position information.
4. The display method according to any one of claims 1-3, wherein the determining a target AR special effect based on the video frame image comprises:
determining first pose information of the AR device in the target scene based on the video frame image; and
determining the target AR special effect from at least one AR special effect based on the first pose information and second pose information of the at least one AR special effect in the target scene.
5. The display method according to claim 4, wherein the determining first pose information of the AR device in the target scene based on the video frame image comprises:
performing key point identification on the video frame image to obtain a first key point in the video frame image;
and determining, from second key points of the three-dimensional model of the target scene, a target second key point matched with the first key point, and determining first pose information of the AR device in the scene coordinate system based on three-dimensional coordinate values of the target second key point in the scene coordinate system.
6. The display method according to claim 5, wherein the determining the first pose information of the AR device in the scene coordinate system based on the three-dimensional coordinate values of the target second key point in the scene coordinate system includes:
determining first pose information of the AR device in the scene coordinate system based on the two-dimensional coordinate values of the first key point in the image coordinate system and the three-dimensional coordinate values of the target second key point in the scene coordinate system.
7. An AR special effect display device, characterized in that the display device comprises:
an acquisition module, configured to acquire a video frame image obtained by an AR device capturing a target scene;
a first determining module, configured to determine a target AR special effect based on the video frame image, and determine environment information of the target scene based on the video frame image, the environment information including at least one of: time information, brightness information, scene information, and weather information;
a processing module, configured to perform, based on the environment information, processing corresponding to the environment information on the video frame image, and display the processed video frame image and the target AR special effect through the AR device;
wherein the display device further comprises: a second determining module, configured to determine target environment information corresponding to the target AR special effect;
the processing module is specifically configured to: match the environment information determined from the video frame image against the target environment information; and in a case where the two are inconsistent, adjust the video frame image so that the environment information of the adjusted video frame image is consistent with or close to the target environment information;
and the processing module, when performing the processing corresponding to the environment information on the video frame image, is configured to:
performing at least one of the following on the video frame image:
Brightness adjustment processing, saturation adjustment processing, hue adjustment processing, and sharpness adjustment processing.
8. A computer device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory, wherein the machine-readable instructions, when executed by the processor, perform the steps of the AR special effect display method according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a computer device, performs the steps of the AR special effect display method according to any one of claims 1 to 6.
CN202110203047.8A 2021-02-23 2021-02-23 AR special effect display method and device, computer equipment and storage medium Active CN112884909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110203047.8A CN112884909B (en) 2021-02-23 2021-02-23 AR special effect display method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110203047.8A CN112884909B (en) 2021-02-23 2021-02-23 AR special effect display method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112884909A CN112884909A (en) 2021-06-01
CN112884909B true CN112884909B (en) 2024-09-13

Family

ID=76053924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110203047.8A Active CN112884909B (en) 2021-02-23 2021-02-23 AR special effect display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112884909B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761114B (en) * 2022-10-28 2024-04-30 如你所视(北京)科技有限公司 Video generation method, device and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109089038A (en) * 2018-08-06 2018-12-25 百度在线网络技术(北京)有限公司 Augmented reality image pickup method, device, electronic equipment and storage medium
CN111757082A (en) * 2020-06-17 2020-10-09 深圳增强现实技术有限公司 Image processing method and system applied to AR intelligent device
CN112288882A (en) * 2020-10-30 2021-01-29 北京市商汤科技开发有限公司 Information display method and device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101764909B (en) * 2008-12-08 2012-11-14 新奥特(北京)视频技术有限公司 Method for determining key values of pixels in image
JP5371501B2 (en) * 2009-03-17 2013-12-18 オリンパスイメージング株式会社 Image processing apparatus, image processing method, and image processing program
CN107993191B (en) * 2017-11-30 2023-03-21 腾讯科技(深圳)有限公司 Image processing method and device
US10573067B1 (en) * 2018-08-22 2020-02-25 Sony Corporation Digital 3D model rendering based on actual lighting conditions in a real environment
US11164367B2 (en) * 2019-07-17 2021-11-02 Google Llc Illumination effects from luminous inserted content
CN111127624A (en) * 2019-12-27 2020-05-08 珠海金山网络游戏科技有限公司 Illumination rendering method and device based on AR scene
CN112148125A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 AR interaction state control method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN112884909A (en) 2021-06-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant