CN110175579A - Pose determination method, scene image display method, apparatus, device and medium - Google Patents


Info

Publication number
CN110175579A
CN110175579A (application CN201910458229.2A)
Authority
CN
China
Prior art keywords
shooting device
image
trajectory point
continuous shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910458229.2A
Other languages
Chinese (zh)
Inventor
王浩
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910458229.2A priority Critical patent/CN110175579A/en
Publication of CN110175579A publication Critical patent/CN110175579A/en
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20092 - Interactive image processing based on input by user
    • G06T 2207/20104 - Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present invention disclose a pose determination method for a shooting device, a scene image display method, and corresponding apparatuses, devices and storage media. The pose determination method for the shooting device includes: acquiring an image to be recognized; and inputting the image to be recognized into a pose recognition model to obtain the pose data of the shooting device at the time the image to be recognized was captured. The pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device, together with the pose data of each continuous-shooting trajectory point. The technical solution of the embodiments makes it possible to determine, accurately, quickly and conveniently, the starting viewing angle of a place of interest required by a user, and to provide the user with effective browsing images that start from the image corresponding to that starting viewing angle.

Description

Pose determination method, scene image display method, apparatus, device and medium
Technical field
The embodiments of the present invention relate to image recognition technology, and in particular to a pose determination method for a shooting device, a scene image display method, and corresponding apparatuses, devices and storage media.
Background technique
With the continuous development of the computing power of computing devices, the display capability of display devices, and image processing technology, users can now view the three-dimensional scene of a place of interest on a mobile device anytime and anywhere, and can browse the three-dimensional scene of the place of interest and its surrounding objects.
In the prior art, a user typically inputs attribute information of the place of interest, such as its name, in order to browse the three-dimensional scene of the place and its adjacent objects, thereby achieving purposes such as virtual touring of scenic spots, arranging meeting points, and scene reconstruction.
In the course of implementing the present invention, the inventor found that the prior art has the following defect: a user cannot browse the scene of a place of interest starting from a specific viewing angle of that place, which makes it difficult to satisfy users' varied scene-browsing needs.
Summary of the invention
The embodiments of the present invention provide a pose determination method for a shooting device, a scene image display method, and corresponding apparatuses, devices and storage media, so as to accurately determine the pose data of the shooting device corresponding to the scene-browsing start image required by the user, and then obtain the scene-browsing start image according to that pose data.
In a first aspect, an embodiment of the present invention provides a pose determination method for a shooting device, including:
acquiring an image to be recognized;
inputting the image to be recognized into a pose recognition model to obtain the pose data of the shooting device at the time the image to be recognized was captured;
wherein the pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point.
In a second aspect, an embodiment of the present invention further provides a scene image display method, including:
acquiring an image to be recognized, and acquiring a three-dimensional scene image corresponding to the image to be recognized;
inputting the image to be recognized into a pose recognition model to obtain the pose data of the shooting device at the time the image to be recognized was captured;
obtaining, from the three-dimensional scene image, the scene image corresponding to the pose data of the shooting device;
outputting three-dimensional navigation images starting from that scene image;
wherein the pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point.
In a third aspect, an embodiment of the present invention further provides a pose determination apparatus for a shooting device, including:
an image acquisition module, configured to acquire an image to be recognized;
a pose data acquisition module, configured to input the image to be recognized into a pose recognition model to obtain the pose data of the shooting device at the time the image to be recognized was captured;
wherein the pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point.
In a fourth aspect, an embodiment of the present invention further provides a scene image display apparatus, including:
a three-dimensional scene image acquisition module, configured to acquire an image to be recognized and a three-dimensional scene image corresponding to the image to be recognized;
a device pose data acquisition module, configured to input the image to be recognized into a pose recognition model to obtain the pose data of the shooting device at the time the image to be recognized was captured;
a scene image acquisition module, configured to obtain, from the three-dimensional scene image, the scene image corresponding to the pose data of the shooting device;
an image browsing module, configured to output three-dimensional navigation images starting from that scene image;
wherein the pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point.
In a fifth aspect, an embodiment of the present invention further provides a device, including:
one or more processors; and
a storage apparatus, configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the pose determination method or the scene image display method according to any embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the pose determination method or the scene image display method according to any embodiment of the present invention.
The embodiments of the present invention provide a pose determination method for a shooting device, a scene image display method, and corresponding apparatuses, devices and storage media. A pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of a shooting device and the pose data of each continuous-shooting trajectory point; the model determines the pose data of the shooting device at the time an image to be recognized was captured, and a browsing image corresponding to the image to be recognized is then provided according to that pose data. This solves two defects of the prior art: the inability to accurately determine the pose data of the shooting device corresponding to the scene-browsing start image required by the user, and the inability to browse the scene of a place of interest starting from a specific viewing angle of that place. The starting viewing angle of the place of interest required by the user can thus be determined accurately, quickly and conveniently, and effective browsing images starting from the image corresponding to that viewing angle can be provided to the user.
Brief description of the drawings
Fig. 1 is a flowchart of the pose determination method for a shooting device provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the scene image display method provided by Embodiment 2 of the present invention;
Fig. 3 is a structural diagram of the pose determination apparatus for a shooting device provided by Embodiment 3 of the present invention;
Fig. 4 is a structural diagram of the scene image display apparatus provided by Embodiment 4 of the present invention;
Fig. 5 is a structural diagram of a device provided by Embodiment 5 of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment 1
Fig. 1 is a flowchart of a pose determination method for a shooting device provided by Embodiment 1 of the present invention. This embodiment is applicable to determining the pose data of the shooting device corresponding to the initial browsing image when a user browses images of a place of interest. The method may be executed by a pose determination apparatus for a shooting device, which may be implemented in software and/or hardware and may be integrated into a device such as a server. As shown in Fig. 1, the method specifically includes the following steps:
S110: Acquire an image to be recognized.
In this embodiment, the image to be recognized may be input by the user, may be obtained automatically by searching according to information input by the user, or may be obtained from a link input by the user; this embodiment places no limitation on the source.
Further, in step S120 of this embodiment, the image to be recognized serves as the input image of the pose recognition model, from which the pose data of the shooting device at shooting time is obtained; the image to be recognized should therefore be an image that the pose recognition model can identify correctly.
S120: Input the image to be recognized into the pose recognition model to obtain the pose data of the shooting device at the time the image to be recognized was captured, wherein the pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point.
In this embodiment, the pose recognition model takes an image as input data and outputs the pose data of the shooting device at the time the input image was captured. Pose data refers to data that uniquely defines the shooting angle of the shooting device in two-dimensional, three-dimensional or higher-dimensional space. Typically, the pose data is six-degree-of-freedom data in three-dimensional space, or the position rotation matrix and position translation matrix corresponding to such six-degree-of-freedom data.
Further, in this embodiment, the pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point. Illustratively, the training process of the pose recognition model may include the following four steps:
Step 1: Acquire the pose data of a set number of continuous-shooting trajectory points of the shooting device, wherein the pose data includes at least the position rotation matrix and position translation matrix of each continuous-shooting trajectory point.
A continuous-shooting trajectory point is a point generated while the shooting device rotates continuously in only one rotation direction while keeping its deflection in the other rotation directions unchanged. Further, "rotating continuously in one rotation direction" means that the shooting device rotates in that direction in increments of a minimum rotation angle.
Illustratively, if the shooting device can rotate in three directions A, B and C, the continuous-shooting trajectory points may be: points obtained while the device keeps its deflection in directions B and C unchanged and rotates in direction A in increments of the minimum rotation angle (hereinafter the first kind of trajectory point); points obtained while the device keeps its deflection in directions A and C unchanged and rotates in direction B in the same way (hereinafter the second kind); or points obtained while the device keeps its deflection in directions A and B unchanged and rotates in direction C in the same way (hereinafter the third kind). Of course, the continuous-shooting trajectory points may also be any two, or all three, of these kinds.
In this embodiment, the number of continuous-shooting trajectory points acquired is a set number whose size can be chosen according to actual requirements. Illustratively, the higher the required precision of the output of the pose recognition model, the larger the set number should be. Of course, a larger set number also increases the computation required to train the model, so model precision and training cost can be weighed together to determine the set number.
In this embodiment, the pose data of the shooting device includes at least the position rotation matrix and position translation matrix of each continuous-shooting trajectory point. It can be understood that when the shooting device shoots in three-dimensional or higher-dimensional space, one position rotation matrix and one position translation matrix uniquely define one shooting angle of the device. Each shooting trajectory point therefore corresponds to one position rotation matrix and one position translation matrix, and no two shooting trajectory points share both the same rotation matrix and the same translation matrix.
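As a rough sketch of the first kind of trajectory point described above (rotation about a single axis in fixed minimum-angle increments while the other deflections stay unchanged), the poses could be enumerated as follows; the choice of the Z axis, the 5-degree minimum angle, and the fixed device position are assumptions of this illustration, not details given by the text:

```python
import math

def rotation_about_z(theta_rad):
    """3x3 position rotation matrix for a rotation of theta_rad about the Z axis."""
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def trajectory_points(min_angle_deg=5.0):
    """Enumerate continuous-shooting trajectory-point poses: rotate about one
    axis (here Z) in minimum-angle increments, keeping the deflection in the
    other directions, and the device position, unchanged."""
    points = []
    for k in range(int(360.0 / min_angle_deg)):
        theta = math.radians(k * min_angle_deg)
        points.append({
            "rotation": rotation_about_z(theta),   # position rotation matrix
            "translation": [0.0, 0.0, 0.0],        # position translation matrix (3x1)
        })
    return points

print(len(trajectory_points()))  # → 72
```

Each generated pose pairs one rotation matrix with one translation matrix, matching the one-to-one correspondence between trajectory points and pose data described above.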
Step 2: Obtain the position quaternion of each continuous-shooting trajectory point from its position rotation matrix.
It can be understood that when the shooting device shoots in three-dimensional space, the position rotation matrix of a shooting trajectory point and the position quaternion can be converted into each other. Since the quaternion representation is more compact than the rotation matrix representation, in this embodiment the position rotation matrix is converted into a position quaternion whenever the shooting space of the device is three-dimensional. Training the model with position quaternions reduces the computation during training and keeps the calculation simpler.
Further, for three-dimensional space, the position rotation matrix is a 3×3 matrix and the position translation matrix is a 3×1 matrix.
Illustratively, a position quaternion in three-dimensional space is expressed as q = q0 + q1×i + q2×j + q3×k, where q0, q1, q2 and q3 are the quaternion components and i, j and k are three known rotation vectors. The position rotation matrix in three-dimensional space is a 3×3 matrix R; in terms of the quaternion components:

R = [ 1−2(q2²+q3²)   2(q1×q2−q0×q3)   2(q1×q3+q0×q2) ]
    [ 2(q1×q2+q0×q3)   1−2(q1²+q3²)   2(q2×q3−q0×q1) ]
    [ 2(q1×q3−q0×q2)   2(q2×q3+q0×q1)   1−2(q1²+q2²) ]

The q0, q1, q2 and q3 in the matrix R are exactly the position quaternion components, which shows that the position rotation matrix and the position quaternion can be converted into each other.
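The conversion from a position quaternion to the 3×3 position rotation matrix can be written directly from the components; a minimal sketch, assuming a unit quaternion with components ordered (q0, q1, q2, q3):

```python
def quat_to_rotation_matrix(q0, q1, q2, q3):
    """Convert a unit position quaternion q = q0 + q1*i + q2*j + q3*k
    into the 3x3 position rotation matrix R."""
    return [
        [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1*q1 + q2*q2)],
    ]

# The identity quaternion (no rotation) yields the identity matrix:
print(quat_to_rotation_matrix(1.0, 0.0, 0.0, 0.0))
# → [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```

The inverse conversion (matrix to quaternion) exists as well, which is what Step 2 relies on; it is omitted here for brevity.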
Step 3: Acquire the position image corresponding to each continuous-shooting trajectory point in the same scene.
In this embodiment, a position image is the image captured by the shooting device in the shooting pose corresponding to the pose data of a given trajectory point. Each shooting trajectory point corresponds to one position image, and different shooting trajectory points correspond to different position images.
Further, the position images corresponding to one group of continuous-shooting trajectory points must be shot in the same scene, whereas the position images corresponding to different groups may be shot either in the same scene or in different scenes. Further, to improve the accuracy of the output of the pose recognition model, multiple groups of set numbers of continuous-shooting trajectory points are generally acquired in Step 1 above, and the "set number" of different groups may be the same or different.
Specifically, the "scene" of the position images can be chosen according to the application scenario of the pose recognition model. If the model is only used to identify the pose data of the shooting device for images of one specific scene (for example, the Nine-Dragon Wall in Beihai Park), the position images of the different groups should all be shot in that specific scene; if the model is used for images of one class of scene (for example, parks), the position images of the different groups should be shot in different parks; if the model is used for images of multiple classes of scene (for example, parks and streets), the position images of the different groups should be shot in different parks and different streets.
Further, since the pose recognition model is trained on data such as the position images, it can only accurately identify images of the same content category as the position images. In this embodiment, therefore, the image to be recognized and the position images should belong to the same content category, where a content category may be, for example, street views, gardens, deserts or valleys.
Step 4: Train the model using the position quaternions, position translation matrices and position images to obtain the pose recognition model.
In this embodiment, the training data are the position quaternion, position translation matrix and corresponding position image of each group of continuous-shooting trajectory points. During training, the input data is an image and the output data is the pose data of the shooting device corresponding to that image, i.e. a 7-dimensional vector composed of the position quaternion and the position translation matrix. Specifically, each position image is used as an input, the 7-dimensional vector composed of the position quaternion and position translation matrix of the corresponding trajectory point is used as its label, and the model is trained by gradient descent to obtain the pose recognition model. Specifically:
The position images corresponding to the trajectory points of each group are input into the model in turn. Each time the model outputs the pose data of the shooting device for a position image, the loss of that image is computed by the loss function from the position quaternion and position translation matrix corresponding to the image; the result is fed back to the model obtained by the current round of training, the parameters of that model are adjusted, and the pose recognition model is finally obtained.
Further, the model may be independently trained several times using all the position quaternions, position translation matrices and position images acquired in Steps 1 to 3, and the trained model with the smallest total loss, i.e. the sum of the losses computed for each position image during training, may be selected as the final pose recognition model.
Illustratively, the loss function may be loss = ‖t − t̂‖ + ‖q − q̂‖, where t is the position translation matrix computed during training for the input position image, t̂ is the position translation matrix of the shooting trajectory point corresponding to that image, q is the result of evaluating q = q0 + q1×i + q2×j + q3×k with the position quaternion computed during training for the input image, and q̂ is the result of evaluating the same formula with the position quaternion of the shooting trajectory point corresponding to the input image.
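A minimal sketch of this loss, under the assumptions that ‖·‖ is the Euclidean norm and the two terms are weighted equally (the sample values below are invented for illustration):

```python
import math

def pose_loss(t, t_hat, q, q_hat):
    """loss = ||t - t_hat|| + ||q - q_hat||: Euclidean distance between the
    predicted and ground-truth position translation matrices, plus the distance
    between the predicted and ground-truth position quaternions."""
    trans_err = math.sqrt(sum((a - b) ** 2 for a, b in zip(t, t_hat)))
    quat_err = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, q_hat)))
    return trans_err + quat_err

# A perfect prediction gives zero loss:
print(pose_loss([1, 2, 3], [1, 2, 3], [1, 0, 0, 0], [1, 0, 0, 0]))  # → 0.0

# A translation off by one unit along one axis contributes exactly that error:
print(pose_loss([0, 0, 0], [1, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]))  # → 1.0
```

In practice a weighting factor between the translation and rotation terms is often introduced; the text above does not specify one, so none is used here.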
In this embodiment, the model being trained may be a logistic regression model or the like. Illustratively, it may be a model obtained by modifying the GoogLeNet deep learning architecture: the softmax activation layer and FC layer connected in sequence to the softmax0 layer are deleted, the softmax activation layer and FC layer connected in sequence to the softmax1 layer are deleted, the softmax activation layer connected to softmax2 is deleted, and the softmax0, softmax1 and softmax2 layers are each replaced with a 7-dimensional vector output.
This embodiment of the present invention provides a pose determination method for a shooting device. A pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point, and is used to determine the pose data of the shooting device at the time an image to be recognized was captured. This solves the prior-art defect that the pose data of the shooting device corresponding to the scene-browsing start image required by the user cannot be determined accurately, so that the starting viewing angle of the place of interest required by the user can be determined accurately, quickly and conveniently.
Embodiment 2
Fig. 2 is a flowchart of a scene image display method provided by Embodiment 2 of the present invention. This embodiment is applicable to determining, when a user browses images of a place of interest, the pose data of the shooting device corresponding to the initial browsing image, and then browsing images of the place of interest starting from the image corresponding to that pose data. The method may be executed by a scene image display apparatus, which may be implemented in software and/or hardware and may be integrated into a device such as a server. As shown in Fig. 2, the method specifically includes the following steps:
S210: Acquire an image to be recognized, and acquire a three-dimensional scene image corresponding to the image to be recognized.
In this embodiment, terms identical or corresponding to those in the other embodiments are explained in the same way and are not repeated here.
In this embodiment, the three-dimensional scene image corresponding to the image to be recognized may be a three-dimensional scene containing all the scenery in the image to be recognized. Specifically, if the image to be recognized contains the Hall of Prayer for Good Harvests in the Temple of Heaven, the acquired three-dimensional scene image may be a three-dimensional scene whose main scenery is the Hall of Prayer for Good Harvests.
S220: Input the image to be recognized into the pose recognition model to obtain the pose data of the shooting device at the time the image to be recognized was captured, wherein the pose recognition model is trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point.
S230: Obtain, from the three-dimensional scene image, the scene image corresponding to the pose data of the shooting device.
It can be understood that a three-dimensional scene image is a panoramic image stitched from groups of photos shot by the shooting device over 360 degrees. A photo whose shooting angle is the same as, or similar to, the shooting angle corresponding to the pose data of the shooting device can therefore be found in the three-dimensional scene image; that photo is the scene image corresponding to the pose data of the shooting device acquired in this step S230.
Specifically, if no photo in the three-dimensional scene image has exactly the shooting angle corresponding to the pose data of the shooting device, a photo with a similar shooting angle can be found according to a closest-shooting-angle determination rule.
Illustratively, if the pose data includes at least the position rotation matrix and position translation matrix of each continuous-shooting trajectory point, the closest-shooting-angle determination rule may be:
First, search the three-dimensional scene image for all images whose position translation matrix is identical to the position translation matrix in the pose data of the shooting device, and use them as candidate images;
Then, among the candidate images, take the one whose position quaternion has the smallest quaternion difference from the position quaternion corresponding to the position rotation matrix in the pose data of the shooting device as the scene image corresponding to that pose data. Here the quaternion difference is equal to (q0−q'0)+(q1−q'1)+(q2−q'2)+(q3−q'3), where q0, q1, q2, q3 is the position quaternion corresponding to the position rotation matrix in the pose data of the shooting device, and q'0, q'1, q'2, q'3 is the position quaternion corresponding to the candidate image.
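The candidate selection above can be sketched as follows. This is a minimal illustration: the candidate list is invented, the difference formula is the signed component sum stated in the text, and taking its absolute value when minimizing is an assumption of the sketch:

```python
def quaternion_difference(q, q_prime):
    """Signed component-wise sum (q0-q'0)+(q1-q'1)+(q2-q'2)+(q3-q'3)."""
    return sum(a - b for a, b in zip(q, q_prime))

def select_scene_image(device_quat, candidates):
    """Among candidate images that already share the device's position
    translation matrix, pick the one whose position quaternion difference
    from the device's quaternion is smallest in magnitude."""
    return min(candidates,
               key=lambda c: abs(quaternion_difference(device_quat, c["quat"])))

# Hypothetical candidates, pre-filtered to the device's translation matrix:
candidates = [
    {"name": "photo_a", "quat": (0.92, 0.10, 0.00, 0.38)},
    {"name": "photo_b", "quat": (1.00, 0.00, 0.00, 0.00)},
]
best = select_scene_image((0.99, 0.01, 0.00, 0.05), candidates)
print(best["name"])  # → photo_b
```

The two-stage filter (exact translation match, then nearest quaternion) mirrors the "first ... then ..." structure of the rule described above.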
S240: Output three-dimensional navigation images starting from the scene image.
In this embodiment, after the scene image is obtained, three-dimensional navigation images in any browsing direction can be output starting from that scene image, based on the three-dimensional scene image acquired in step S210. Specifically, the browsing direction of the three-dimensional navigation images may be chosen randomly or may be fixed.
This embodiment of the present invention provides a scene image display method. A pose recognition model, trained using the position images corresponding to the continuous-shooting trajectory points of the shooting device and the pose data of each continuous-shooting trajectory point, determines the pose data of the shooting device at the time the image to be recognized was captured, and a browsing image corresponding to the image to be recognized is provided according to that pose data. This solves the prior-art defect that scene browsing cannot start from a specific viewing angle of the user's place of interest, so that the starting viewing angle of the place of interest required by the user can be determined accurately, quickly and conveniently, and effective browsing images starting from the image corresponding to that viewing angle can be provided to the user.
On the basis of the above embodiments, before the three-dimensional navigation image is output with the scene image as the starting point, the method may further include: obtaining an image browsing direction input by the user.
Correspondingly, outputting the three-dimensional navigation image with the scene image as the starting point is embodied as: taking the scene image as the starting point and outputting the three-dimensional navigation image according to the image browsing direction.
In this embodiment, the viewing direction of the three-dimensional navigation image is neither arbitrary nor fixed, but is input by the user. The user can input an image viewing direction according to personal preference or browsing demand, so that the three-dimensional navigation image is output with the scene image as the starting point and according to that direction. The image browsing direction may, for example, be forward browsing along the X axis of the three-dimensional space, or backward browsing along the Y axis of the three-dimensional space.
The benefit of this arrangement is that it improves the match between the picture content of the three-dimensional navigation image and the user's browsing demand, and better satisfies the user's actual needs.
Embodiment three
Fig. 3 is a structural diagram of a posture determining device for a capture apparatus provided by Embodiment Three of the present invention. On the basis of the above embodiments, this embodiment provides an implementation of the "posture determination method for a capture apparatus". Terms identical or corresponding to those of the above embodiments are not explained again here.
As shown in Fig. 3, the device includes an image acquisition module 301 and an attitude data acquisition module 302, in which:
the image acquisition module 301 is configured to obtain an image to be recognized;
the attitude data acquisition module 302 is configured to input the image to be recognized into an attitude recognition model and obtain the attitude data of the capture apparatus at the time the image to be recognized was shot;
wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
This embodiment of the invention provides a posture determining device for a capture apparatus. The device first obtains the image to be recognized through the image acquisition module 301, and then inputs it into the attitude recognition model through the attitude data acquisition module 302 to obtain the attitude data of the capture apparatus at the time the image was shot, the attitude recognition model having been trained with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of those track points.
This resolves the technical deficiency in the prior art that the attitude data of the capture apparatus corresponding to the browsing start image of the scene required by the user cannot be determined accurately, and achieves accurate, rapid and convenient determination of the starting viewing angle of the place of interest required by the user.
On the basis of the above embodiments, the training process of the attitude recognition model may include:
obtaining the attitude data of a set number of consecutive shooting track points of the capture apparatus, wherein the attitude data at least includes the position rotation matrix and the position translation matrix of each of the consecutive shooting track points;
obtaining the position quaternion of each of the consecutive shooting track points according to its position rotation matrix;
obtaining, in the same scene, the position image corresponding to each of the consecutive shooting track points;
and training with the position quaternions, the position translation matrices and the position images to obtain the attitude recognition model.
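The second step above, deriving the position quaternion of a track point from its position rotation matrix, is a standard conversion. A sketch follows; the four-branch structure is the usual numerically stable form and nothing in it is specific to this application:

```python
import numpy as np

def rotation_matrix_to_quaternion(R):
    """Convert a 3x3 position rotation matrix to a unit quaternion
    (w, x, y, z). The four branches select the largest diagonal term
    so the square root stays well conditioned for any rotation.
    """
    R = np.asarray(R, dtype=float)
    t = np.trace(R)
    if t > 0:
        s = np.sqrt(t + 1.0) * 2.0
        w = 0.25 * s
        x = (R[2, 1] - R[1, 2]) / s
        y = (R[0, 2] - R[2, 0]) / s
        z = (R[1, 0] - R[0, 1]) / s
    elif R[0, 0] > R[1, 1] and R[0, 0] > R[2, 2]:
        s = np.sqrt(1.0 + R[0, 0] - R[1, 1] - R[2, 2]) * 2.0
        w = (R[2, 1] - R[1, 2]) / s
        x = 0.25 * s
        y = (R[0, 1] + R[1, 0]) / s
        z = (R[0, 2] + R[2, 0]) / s
    elif R[1, 1] > R[2, 2]:
        s = np.sqrt(1.0 + R[1, 1] - R[0, 0] - R[2, 2]) * 2.0
        w = (R[0, 2] - R[2, 0]) / s
        x = (R[0, 1] + R[1, 0]) / s
        y = 0.25 * s
        z = (R[1, 2] + R[2, 1]) / s
    else:
        s = np.sqrt(1.0 + R[2, 2] - R[0, 0] - R[1, 1]) * 2.0
        w = (R[1, 0] - R[0, 1]) / s
        x = (R[0, 2] + R[2, 0]) / s
        y = (R[1, 2] + R[2, 1]) / s
        z = 0.25 * s
    return np.array([w, x, y, z])
```

For example, the identity matrix maps to (1, 0, 0, 0), and a 90-degree rotation about the Z axis maps to approximately (0.707, 0, 0, 0.707).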
On the basis of the above embodiments, training with the position quaternions, the position translation matrices and the position images to obtain the attitude recognition model may specifically be:
taking each position image as an input, while taking each 7-dimensional vector composed of the position quaternion and the position translation matrix of the corresponding consecutive shooting track point as its label, and training with the gradient descent method to obtain the attitude recognition model.
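A minimal sketch of this training arrangement: each position image is the input and the 7-dimensional vector [position quaternion (4) | position translation (3)] is its label, fitted by gradient descent. The linear model on flattened pixels, the learning rate and the epoch count are illustrative assumptions; the application does not fix a particular model here (one embodiment mentions a logistic regression model):

```python
import numpy as np

def train_pose_model(images, quaternions, translations, lr=0.01, epochs=500):
    """Fit a linear map from flattened images to 7-dimensional pose
    labels [quaternion | translation] by full-batch gradient descent.

    `images` is a list of equally shaped arrays; `quaternions` is
    (n, 4) and `translations` is (n, 3). Illustrative stand-in for
    whatever regression model the application actually trains.
    """
    X = np.stack([img.ravel() for img in images])   # (n, d) inputs
    Y = np.hstack([quaternions, translations])      # (n, 7) labels
    W = np.zeros((X.shape[1], 7))
    b = np.zeros(7)
    n = len(X)
    for _ in range(epochs):
        err = X @ W + b - Y                         # (n, 7) residuals
        W -= lr * X.T @ err / n                     # gradient step on weights
        b -= lr * err.mean(axis=0)                  # gradient step on bias
    return W, b
```

On a toy pair of two-pixel "images" the fitted model reproduces the labels to high precision, which is only meant to show that the label construction and update rule are consistent.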
On the basis of the above embodiments, the image to be recognized may belong to the same picture content category as the position images.
On the basis of the above embodiments, the picture content categories may at least include: the street view category, the garden category, the desert category and the valley category.
On the basis of the above embodiments, the attitude recognition model being obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points may specifically be:
the attitude recognition model is a logistic regression model, trained with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
The posture determining device for a capture apparatus provided by this embodiment of the present invention can execute the posture determination method for a capture apparatus provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference can be made to the posture determination method for a capture apparatus provided by any embodiment of the present invention.
Example IV
Fig. 4 is a structural diagram of a device for exhibiting a scene image provided by Embodiment Four of the present invention. On the basis of the above embodiments, this embodiment provides an implementation of the "method for exhibiting a scene image". Terms identical or corresponding to those of the above embodiments are not explained again here.
As shown in Fig. 4, the device includes a three-dimensional scene image acquisition module 401, an apparatus attitude data acquisition module 402, a scene image acquisition module 403 and a picture browsing module 404, in which:
the three-dimensional scene image acquisition module 401 is configured to obtain an image to be recognized, and to obtain a three-dimensional scene image corresponding to the image to be recognized;
the apparatus attitude data acquisition module 402 is configured to input the image to be recognized into an attitude recognition model and obtain the attitude data of the capture apparatus at the time the image to be recognized was shot;
the scene image acquisition module 403 is configured to obtain, in the three-dimensional scene image, the scene image corresponding to the attitude data of the capture apparatus;
the picture browsing module 404 is configured to output a three-dimensional navigation image, taking the scene image as a starting point;
wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
This embodiment of the invention provides a device for exhibiting a scene image. The device first obtains the image to be recognized and the corresponding three-dimensional scene image through the three-dimensional scene image acquisition module 401; then inputs the image to be recognized into the attitude recognition model through the apparatus attitude data acquisition module 402 to obtain the attitude data of the capture apparatus at the time the image was shot; then obtains, through the scene image acquisition module 403, the scene image corresponding to that attitude data in the three-dimensional scene image; and finally outputs a three-dimensional navigation image through the picture browsing module 404, taking the scene image as the starting point. The attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of those track points.
This resolves the technical deficiency in the prior art that a scene cannot be browsed from a starting point given by the specific viewing angle of a place the user is interested in, and achieves accurate, rapid and convenient determination of the starting viewing angle of the place of interest required by the user, taking the image corresponding to that starting viewing angle as the starting point and thereby providing the user with an effective browsing image.
On the basis of the above embodiments, the device may further include:
a viewing direction acquisition module, configured to obtain the image browsing direction input by the user before the three-dimensional navigation image is output with the scene image as the starting point.
Correspondingly, the picture browsing module 404 may specifically be configured to:
take the scene image as the starting point and output the three-dimensional navigation image according to the image browsing direction.
The device for exhibiting a scene image provided by this embodiment of the present invention can execute the method for exhibiting a scene image provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference can be made to the method for exhibiting a scene image provided by any embodiment of the present invention.
Embodiment five
Fig. 5 is a structural schematic diagram of an apparatus provided by Embodiment Five of the present invention, showing a block diagram of an example apparatus 12 suitable for implementing embodiments of the present invention. The apparatus 12 shown in Fig. 5 is only an example and should not impose any restriction on the function and scope of use of the embodiments of the present invention.
As shown in Fig. 5, the apparatus 12 takes the form of a general-purpose computing device. Its components may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus and the Peripheral Component Interconnect (PCI) bus.
The apparatus 12 typically comprises a variety of computer-system-readable media. These media can be any available media accessible by the apparatus 12, including volatile and non-volatile media and removable and non-removable media.
The system memory 28 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The apparatus 12 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 34 can be used for reading and writing non-removable, non-volatile magnetic media (not shown in Fig. 5 and commonly called a "hard disk drive"). Although not shown in Fig. 5, a disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disc drive for reading and writing a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM or other optical media) can be provided. In these cases, each drive can be connected to the bus 18 through one or more data media interfaces. The system memory 28 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 40 having a set of (at least one) program modules 42 may be stored, for example, in the system memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present invention.
The apparatus 12 can also communicate with one or more external devices 14 (such as a keyboard, a pointing device and a display 24), with one or more devices that enable a user to interact with the apparatus 12, and/or with any device (such as a network card or a modem) that enables the apparatus 12 to communicate with one or more other computing devices. Such communication can be carried out through an input/output (I/O) interface 22. Moreover, the apparatus 12 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 20. As shown, the network adapter 20 communicates with the other modules of the apparatus 12 through the bus 18. It should be understood that, although not shown in the drawings, other hardware and/or software modules can be used in conjunction with the apparatus 12, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems.
The processing unit 16, by running the programs stored in the system memory 28, executes various functional applications and data processing, for example implementing the posture determination method for a capture apparatus provided by the embodiments of the present invention, namely: obtaining an image to be recognized; inputting the image to be recognized into an attitude recognition model to obtain the attitude data of the capture apparatus at the time the image to be recognized was shot; wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points;
or implementing the method for exhibiting a scene image provided by the embodiments of the present invention, namely: obtaining an image to be recognized, and obtaining a three-dimensional scene image corresponding to the image to be recognized; inputting the image to be recognized into an attitude recognition model to obtain the attitude data of the capture apparatus at the time the image to be recognized was shot; obtaining, in the three-dimensional scene image, the scene image corresponding to the attitude data of the capture apparatus; and outputting a three-dimensional navigation image, taking the scene image as a starting point; wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
Embodiment six
Embodiment Six of the present invention further provides a computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, it implements the posture determination method for a capture apparatus described in any embodiment of the present invention, namely: obtaining an image to be recognized; inputting the image to be recognized into an attitude recognition model to obtain the attitude data of the capture apparatus at the time the image to be recognized was shot; wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points;
or implements the method for exhibiting a scene image provided by the embodiments of the present invention, namely: obtaining an image to be recognized, and obtaining a three-dimensional scene image corresponding to the image to be recognized; inputting the image to be recognized into an attitude recognition model to obtain the attitude data of the capture apparatus at the time the image to be recognized was shot; obtaining, in the three-dimensional scene image, the scene image corresponding to the attitude data of the capture apparatus; and outputting a three-dimensional navigation image, taking the scene image as a starting point; wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
The computer storage medium of the embodiments of the present invention can adopt any combination of one or more computer-readable media. A computer-readable medium can be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fibre, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium can be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take various forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device.
The program code contained on a computer-readable medium can be transmitted with any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention can be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to the above embodiments; without departing from the inventive concept, it may also include other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.

Claims (12)

1. A posture determination method for a capture apparatus, characterized by comprising:
obtaining an image to be recognized;
inputting the image to be recognized into an attitude recognition model to obtain the attitude data of a capture apparatus at the time the image to be recognized was shot;
wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
2. The method according to claim 1, characterized in that the training process of the attitude recognition model comprises:
obtaining the attitude data of a set number of consecutive shooting track points of the capture apparatus, wherein the attitude data at least includes the position rotation matrix and the position translation matrix of each of the consecutive shooting track points;
obtaining the position quaternion of each of the consecutive shooting track points according to its position rotation matrix;
obtaining, in the same scene, the position image corresponding to each of the consecutive shooting track points;
and training with the position quaternions, the position translation matrices and the position images to obtain the attitude recognition model.
3. The method according to claim 2, characterized in that the training with the position quaternions, the position translation matrices and the position images to obtain the attitude recognition model comprises:
taking each position image as an input, while taking each 7-dimensional vector composed of the position quaternion and the position translation matrix of the corresponding consecutive shooting track point as its label, and training with the gradient descent method to obtain the attitude recognition model.
4. The method according to any one of claims 1 to 3, characterized in that the image to be recognized belongs to the same picture content category as the position images.
5. The method according to claim 4, characterized in that the picture content categories at least include: the street view category, the garden category, the desert category and the valley category.
6. The method according to claim 1, characterized in that the attitude recognition model being obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points specifically comprises:
the attitude recognition model is a logistic regression model, trained with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
7. A method for exhibiting a scene image, characterized by comprising:
obtaining an image to be recognized, and obtaining a three-dimensional scene image corresponding to the image to be recognized;
inputting the image to be recognized into an attitude recognition model to obtain the attitude data of a capture apparatus at the time the image to be recognized was shot;
obtaining, in the three-dimensional scene image, the scene image corresponding to the attitude data of the capture apparatus;
outputting a three-dimensional navigation image, taking the scene image as a starting point;
wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
8. The method according to claim 7, characterized in that, before the outputting of the three-dimensional navigation image taking the scene image as a starting point, the method further comprises:
obtaining an image browsing direction input by the user;
correspondingly, the outputting of the three-dimensional navigation image taking the scene image as a starting point specifically comprises:
taking the scene image as the starting point and outputting the three-dimensional navigation image according to the image browsing direction.
9. A posture determining device for a capture apparatus, characterized by comprising:
an image acquisition module, configured to obtain an image to be recognized;
an attitude data acquisition module, configured to input the image to be recognized into an attitude recognition model and obtain the attitude data of a capture apparatus at the time the image to be recognized was shot;
wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
10. A device for exhibiting a scene image, characterized by comprising:
a three-dimensional scene image acquisition module, configured to obtain an image to be recognized and to obtain a three-dimensional scene image corresponding to the image to be recognized;
an apparatus attitude data acquisition module, configured to input the image to be recognized into an attitude recognition model and obtain the attitude data of a capture apparatus at the time the image to be recognized was shot;
a scene image acquisition module, configured to obtain, in the three-dimensional scene image, the scene image corresponding to the attitude data of the capture apparatus;
a picture browsing module, configured to output a three-dimensional navigation image, taking the scene image as a starting point;
wherein the attitude recognition model is obtained by training with the position images corresponding to the consecutive shooting track points of the capture apparatus and the attitude data of each of the consecutive shooting track points.
11. An apparatus, characterized in that the apparatus comprises:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the posture determination method for a capture apparatus according to any one of claims 1 to 6, or the method for exhibiting a scene image according to either of claims 7 and 8.
12. A storage medium containing computer-executable instructions, characterized in that the computer-executable instructions, when executed by a computer processor, perform the posture determination method for a capture apparatus according to any one of claims 1 to 6, or the method for exhibiting a scene image according to either of claims 7 and 8.
CN201910458229.2A 2019-05-29 2019-05-29 Attitude determination method, the methods of exhibiting of scene image, device, equipment and medium Pending CN110175579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910458229.2A CN110175579A (en) 2019-05-29 2019-05-29 Attitude determination method, the methods of exhibiting of scene image, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN110175579A true CN110175579A (en) 2019-08-27

Family

ID=67696351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910458229.2A Pending CN110175579A (en) 2019-05-29 2019-05-29 Attitude determination method, the methods of exhibiting of scene image, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110175579A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115209052A (en) * 2022-07-08 2022-10-18 维沃移动通信(深圳)有限公司 Image screening method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539942A (en) * 2009-04-30 2009-09-23 北京瑞汛世纪科技有限公司 Method and device for displaying Internet content
WO2016044778A1 (en) * 2014-09-19 2016-03-24 Hamish Forsythe Method and system for an automatic sensing, analysis, composition and direction of a 3d space, scene, object, and equipment
CN106650723A (en) * 2009-10-19 2017-05-10 Metaio有限公司 Method for determining the pose of a camera and for recognizing an object of a real environment
CN107016351A (en) * 2017-03-10 2017-08-04 北京小米移动软件有限公司 Shoot the acquisition methods and device of tutorial message
CN107093191A (en) * 2017-03-06 2017-08-25 阿里巴巴集团控股有限公司 A kind of verification method of image matching algorithm, device and computer-readable storage medium
US20180276500A1 (en) * 2017-03-27 2018-09-27 Fujitsu Limited Image processing apparatus, image processing method, and image processing program


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李克昭: "《CCD/GNSS多传感器融合导航定位的关键技术》", 31 December 2018, 西北工业大学出版社 *
李建等: "《虚拟现实(VR)技术与应用》", 31 January 2018, 河南大学出版社 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115209052A (en) * 2022-07-08 2022-10-18 维沃移动通信(深圳)有限公司 Image screening method and device, electronic equipment and storage medium
CN115209052B (en) * 2022-07-08 2023-04-18 维沃移动通信(深圳)有限公司 Image screening method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109618222B (en) A kind of splicing video generation method, device, terminal device and storage medium
US11488355B2 (en) Virtual world generation engine
CN108334627B (en) Method and device for searching new media content and computer equipment
US10970334B2 (en) Navigating video scenes using cognitive insights
US9699375B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
CN109582880A (en) Interest point information processing method, device, terminal and storage medium
Tompkin et al. Videoscapes: exploring sparse, unstructured video collections
CN110163903A (en) Three-dimensional image acquisition and image positioning method, device, equipment and storage medium
CN108509621B (en) Scenic spot identification method, device, server and storage medium for scenic spot panoramic image
CN108805917A (en) Spatial positioning method, medium, device and computing device
CN108805979A (en) Dynamic model three-dimensional reconstruction method, device, equipment and storage medium
CN103471581B (en) Apparatus and method for providing 3D maps showing a region of interest in real time
CN109543680A (en) Point-of-interest location determination method, device, equipment and medium
US20110043522A1 (en) Image-based lighting simulation for objects
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN112927363A (en) Voxel map construction method and device, computer readable medium and electronic equipment
WO2013167157A1 (en) Browsing and 3d navigation of sparse, unstructured digital video collections
CN112017304B (en) Method, apparatus, electronic device and medium for presenting augmented reality data
CN110175579A (en) Attitude determination method, the methods of exhibiting of scene image, device, equipment and medium
CN109711340A (en) Information matching method, device, equipment and server based on an automobile data recorder
CN114089836B (en) Labeling method, terminal, server and storage medium
CN110109591A (en) Picture editing method and device
CN109559382A (en) Intelligent guide method, apparatus, terminal and medium
CN108763440A (en) Image searching method, device, terminal and storage medium
CN109923540A (en) Real-time recording of gestures and/or sounds for modifying animation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211025

Address after: 105/F, Building 1, No. 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 2/F, Baidu Building, No. 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.
