CN114882107A - Data processing method and device

Data processing method and device

Info

Publication number
CN114882107A
Authority
CN
China
Prior art keywords: data, coordinate, contour, coordinate system, calibration parameter
Legal status: Pending
Application number
CN202210365226.6A
Other languages
Chinese (zh)
Inventor
段勇
王茜莺
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Priority to CN202210365226.6A
Publication of CN114882107A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179 Video signal processing therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a data processing method and apparatus. The method includes: obtaining first pose data, where the first pose data includes pose data of a first contour projected on a projection surface, and the first contour corresponds to at least a partial contour of a first object in first data acquired by an image acquisition device; determining a first calibration parameter based on the first pose data, where the first calibration parameter includes a coordinate conversion parameter between a first coordinate system in which the image acquisition device is located and a second coordinate system in which the projection surface is located; and processing the first pose data based on the first calibration parameter to obtain second pose data.

Description

Data processing method and device
Technical Field
The present application relates to the field of image data processing technologies, and in particular, to a data processing method and apparatus.
Background
In practical applications, the coordinate conversion relationship between the coordinate system of a camera and the coordinate system of a projection surface usually has to be calibrated by means of a known number of calibration references that are arranged relative to the projection surface with a specified size and specified relative positions; the pose of an object in the image data acquired by the camera can then be adjusted based on that coordinate conversion relationship. However, such calibration methods depend strictly on the calibration references and are therefore insufficiently flexible.
Disclosure of Invention
Based on the above problems, embodiments of the present application provide a data processing method and apparatus.
The technical solutions provided by the embodiments of the present application are as follows:
An embodiment of the present application provides a data processing method, where the method includes:
obtaining first pose data, where the first pose data includes pose data of a first contour projected on a projection surface, and the first contour corresponds to at least a partial contour of a first object in first data acquired by an image acquisition device;
determining a first calibration parameter based on the first pose data, where the first calibration parameter includes a coordinate conversion parameter between a first coordinate system in which the image acquisition device is located and a second coordinate system in which the projection surface is located;
and processing the first pose data based on the first calibration parameter to obtain second pose data.
In some embodiments, the determining of the first calibration parameter based on the first pose data includes:
determining target pose data;
and determining the first calibration parameter based on a correspondence between the first pose data and the target pose data.
In some embodiments, the determining target pose data comprises:
obtaining pose adjustment data in response to a pose adjustment operation, where the pose adjustment operation includes at least an operation of adjusting the pose of at least one dimension of the first contour;
determining the target pose data based on the pose adjustment data.
In some embodiments, the determining target pose data comprises:
obtaining pose data of the first object;
determining the target pose data to be the pose data of the first object.
In some embodiments, the obtaining first pose data comprises:
determining a second calibration parameter, where the second calibration parameter includes a coordinate conversion parameter between the first coordinate system and a third coordinate system, and the third coordinate system is a coordinate system whose coordinate origin is a specified coordinate point on the projection surface;
processing contour data of a second contour based on the second calibration parameter to obtain contour data of the first contour, where the contour data of the second contour includes image data corresponding to the at least partial contour in the first data;
and analyzing the contour data of the first contour to obtain the first pose data.
In some embodiments, the determining of the second calibration parameter includes:
determining a third calibration parameter between the third coordinate system and a fourth coordinate system, where the fourth coordinate system is a coordinate system defined by any three non-collinear coordinate points on the projection surface other than the specified coordinate point, with one of the three points as its coordinate origin;
and determining the second calibration parameter based on the third calibration parameter.
In some embodiments, said determining said second calibration parameter based on said third calibration parameter comprises:
obtaining first coordinate data and second coordinate data of the arbitrary three coordinate points; wherein the first coordinate data includes coordinate data of the arbitrary three coordinate points in the first coordinate system; the second coordinate data includes coordinate data of the arbitrary three coordinate points in the fourth coordinate system;
determining a fourth calibration parameter between the first coordinate data and the second coordinate data;
and processing the fourth calibration parameter based on the third calibration parameter to determine the second calibration parameter.
In some embodiments, the processing of the contour data of the second contour based on the second calibration parameter to obtain the contour data of the first contour includes:
performing view transformation on the contour data of the second contour based on the second calibration parameter to obtain view contour data;
obtaining projection parameters of the projection surface;
and performing orthogonal projection processing on the view contour data based on the projection parameters to obtain the contour data of the first contour.
In some embodiments, the method further comprises:
configuring and rendering a target contour and a second object to obtain target data, where the second object includes at least a virtual object, and the pose data of the target contour is the second pose data;
obtaining trajectory information of the target contour on the projection surface;
and outputting the target data to the projection surface based on the trajectory information.
An embodiment of the present application further provides a data processing apparatus, where the apparatus includes:
a determining module, configured to obtain first pose data, where the first pose data includes pose data of a first contour projected on a projection surface, and the first contour corresponds to at least a partial contour of a first object in first data acquired by an image acquisition device;
the determining module being further configured to determine a first calibration parameter based on the first pose data, where the first calibration parameter includes a coordinate conversion parameter between a first coordinate system in which the image acquisition device is located and a second coordinate system in which the projection surface is located;
and a processing module, configured to process the first pose data based on the first calibration parameter to obtain second pose data.
As can be seen from the above, the data processing method provided in the embodiments of the present application can, after obtaining the first pose data of the first contour projected on the projection surface, determine the first calibration parameter between the first coordinate system in which the image acquisition device is located and the second coordinate system in which the projection surface is located based on the first pose data. In determining the first calibration parameter, the method therefore not only removes the dependence on specified calibration objects found in the related art, but also determines the first calibration parameter dynamically and in real time from the first pose data, so that the first calibration parameter can be determined automatically and more flexibly.
Drawings
Fig. 1 is a schematic flowchart of a data processing method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of determining a first calibration parameter according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of obtaining first pose data according to an embodiment of the present application;
Fig. 4 is a schematic diagram illustrating the principle of determining a second calibration parameter according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of obtaining contour data of a first contour according to an embodiment of the present application;
Fig. 6 is a schematic diagram of outputting target data to a projection surface according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In machine vision applications, accurate calibration between the camera coordinate system in which an image acquisition device is located and the projection coordinate system in which a projection device is located has a decisive influence on the effect of the application.
In practical applications, calibration between the camera coordinate system and the projection coordinate system is usually performed by means of at least two calibration objects of known size arranged on the projection apparatus. Only when the arrangement positions of, and relative positions between, the at least two calibration objects meet strict calibration requirements can the image acquisition apparatus acquire image data containing the calibration objects together with their position information on the projection apparatus, and thereby calibrate its intrinsic and extrinsic parameters, that is, the coordinate conversion parameters between the camera coordinate system and the projection coordinate system.
However, the above calibration manner depends strictly on a specified number (at least two) of calibration objects arranged in a specified form: when the size, shape, or relative positional relationship of the calibration objects does not satisfy the calibration conditions, the calibration operation cannot be performed. Moreover, if the relative position between the image acquisition device and the projection device changes, the at least two calibration objects need to be reset and the calibration process performed again. The flexibility of this calibration method is therefore insufficient.
Based on the above problems, embodiments of the present application provide a data processing method and apparatus. In the data processing method provided by the embodiments of the present application, after the pose data of the first contour projected on the projection surface, that is, the first pose data, is obtained, the first calibration parameter can be determined based on the first pose data.
The data processing method provided by the embodiments of the present application can therefore determine the first calibration parameter based on the first pose data of the first contour projected on the projection surface. The first calibration parameter can thus be determined flexibly and automatically from the pose data of the projected first contour, removing the dependence on at least two calibration objects during calibration in the related art. Even when the relative position between the image acquisition device and the projection surface changes, the first calibration parameter can be determined dynamically and accurately from the first contour projected on the projection surface, further improving the flexibility of determining the first calibration parameter.
The data processing method provided in the embodiments of the present application may be implemented by a processor of an electronic device, where the processor may be at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central processing unit (CPU), a controller, a microcontroller, and a microprocessor.
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present application, and as shown in fig. 1, the flow may include steps 101 to 103:
Step 101: obtain first pose data.
The first pose data includes pose data of a first contour projected on the projection surface; the first contour corresponds to at least a partial contour of the first object in the first data acquired by the image acquisition device.
In one embodiment, the image acquisition device may include a two-dimensional camera, in which case the first data may include a two-dimensional image; as another example, the image acquisition device may include a depth camera, in which case the first data may include at least depth data of the first object.
In one embodiment, there may be a plurality of image acquisition devices. The devices may be of different types, for example a first image acquisition device being a two-dimensional camera and a second image acquisition device being a depth camera; they may also be arranged differently relative to the projection surface, for example with the first image acquisition device at a first distance from the projection surface and the second image acquisition device at a second distance.
In one embodiment, the first object may include an object in a static state, such as an office table and chair; alternatively, the first object may include an object capable of moving, such as a worker or an animal. There may also be a plurality of first objects, differing in at least one of type, motion state, and pose.
In one embodiment, the at least partial contour of the first object may be obtained by analyzing the contour of the first object in the first data to obtain the overall contour of the first object and then segmenting that overall contour.
In one embodiment, the at least partial contour of the first object may be obtained by performing edge feature recognition on at least a part of the first object in the first data and determining the result of the edge feature recognition as the at least partial contour of the first object.
In one embodiment, the projection surface may include a projection screen, which may, for example, reflect light projected onto the projection screen in at least one direction; illustratively, the projection screen may also be provided with image and/or video output capabilities.
In one embodiment, the projection surface may have a certain transparency, such as a holographic projection screen for displaying holographic teaching data in holographic podium somatosensory interaction.
In one embodiment, the projection surface may be planar; alternatively, the projection surface may have a three-dimensional structure, for example a combination of planar and curved portions.
In one embodiment, the first pose data may be the same as the pose data of the at least partial contour of the first object in the first data; in this case, the image acquisition device and the projection surface may be arranged in a specified manner, for example with the image acquisition device level with the geometric center point of the projection surface and the lens plane of the image acquisition device parallel to the projection surface.
In one embodiment, the first pose data may differ from the pose data of the at least partial contour of the first object in the first data; for example, the pose characterized by the first pose data may differ from the pose of the at least partial contour in at least one dimension, which is not limited in this application.
Illustratively, the first pose data may be obtained as follows:
if the first contour is detected to have been projected onto the projection surface, the projected first contour is captured to obtain a capture result, and feature recognition is performed on the first contour in the capture result to obtain the pose data of the first contour. For example, the feature recognition of the first contour in the capture result may be realized by a neural network; the capture result may include image data or video data; and the first contour projected on the projection surface may be captured by a two-dimensional image acquisition device.
Step 102: determine a first calibration parameter based on the first pose data.
The first calibration parameter includes a coordinate conversion parameter between the first coordinate system in which the image acquisition device is located and the second coordinate system in which the projection surface is located.
In one embodiment, the first coordinate system may be a three-dimensional coordinate system with the optical center of the image acquisition device as the origin of coordinates.
In one embodiment, the second coordinate system may include a two-dimensional coordinate system having a coordinate point associated with the specified area on the projection surface as a coordinate origin; for example, the coordinate point associated with the designated area may include a geometric center point of the designated area, or may include a point on an edge of the designated area, which is not limited in this embodiment of the present application; for example, the designated area may include a fixed area on the projection surface, or may include an area on the projection surface that can be dynamically adjusted.
In one embodiment, the number of the designated areas may be at least one, and the first calibration parameter may include a set of the same number of calibration parameters as the number of the designated areas under the condition that the number of the designated areas is plural.
In one embodiment, the designated area may be set randomly or preset according to the calibration requirement; for example, the designated area may be set according to at least one of the setting mode of the projection surface, the setting position of the projection surface relative to the image acquisition device, the geometry of the projection surface, the projection property of the projection surface, the viewing requirement of the viewing user, and the projection field environment.
In one embodiment, the designated area and the first contour may have the same geometric shape; their coordinate positions on the projection surface may nevertheless differ, for example with the geometric center point of the designated area at a first coordinate point on the projection surface and the geometric center point of the first contour at a second coordinate point. The two shapes may be the same size, for example circles of equal size, or different sizes, that is, the same geometric shape but different areas.
In one embodiment, the designated area and the first contour may at least partially overlap. Where their shape and size are the same, they may overlap seamlessly. Where they are similar contours of the same geometric shape, if the area of the designated area is smaller than that of the first contour, the set of coordinate points of the designated area may be a proper subset of the set of coordinate points of the first contour; conversely, if the area of the designated area is larger, the set of coordinate points of the first contour may be a proper subset of that of the designated area. The two may also partially overlap, their coordinate-point sets having a non-empty intersection that is a proper subset of each. Each coordinate-point set here is a set of coordinate points on the projection surface.
In one embodiment, the designated area and the first contour may be completely separated, i.e., there is no intersection between the set of coordinate points of the designated area and the set of coordinate points of the first contour.
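The set relations above can be made concrete with a small, purely illustrative Python sketch; the grid sampling and coordinate values below are invented for demonstration and are not part of the embodiments:

```python
# Designated area and first contour modeled as sets of (x, y) grid points
# on the projection surface; the specific values are illustrative only.
area = {(x, y) for x in range(2, 6) for y in range(2, 6)}
contour = {(x, y) for x in range(4, 9) for y in range(4, 9)}

common = area & contour
if not common:
    relation = "completely separated"       # empty intersection
elif area == contour:
    relation = "seamless overlap"           # identical point sets
elif area < contour:
    relation = "area is a proper subset of the contour"
elif contour < area:
    relation = "contour is a proper subset of the area"
else:
    relation = "partial overlap"            # non-empty proper intersection
print(relation)  # -> partial overlap
```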
In one embodiment, the first contour may have the same pose as the designated area. For example, if the first contour seamlessly overlaps the designated area, the first pose data and the second pose data may be the same; correspondingly, if the overlap is not seamless, the first pose data and the second pose data may differ in the pose of at least one dimension.
In one embodiment, the first calibration parameter may include coordinate conversion parameters between a first set of coordinate points in the first coordinate system and a second set of coordinate points in the second coordinate system; for example, the first set may include at least three coordinate points in the first coordinate system, and the second set may include the same number of coordinate points in the second coordinate system.
In one embodiment, the first calibration parameters may be embodied in a matrix.
For example, determining the first calibration parameter based on the first pose data may be implemented in either of the following ways:
analyzing the first pose data, determining the designated contour in the first contour that corresponds to a designated part of the first object, performing feature detection on that designated contour to determine its distortion parameter, inverting the distortion parameter to obtain an inverse distortion parameter, and determining the inverse distortion parameter as the first calibration parameter; or
analyzing the first pose data, determining and analyzing the relative relationship between at least two designated parts of the first object in the first contour, determining a parameter characterizing the degree of change of that relative relationship, inverting the parameter, and determining the inversion result as the first calibration parameter.
Step 103: process the first pose data based on the first calibration parameter to obtain second pose data.
For example, the first calibration parameter and the first pose data may both be embodied in matrix form, so that the second pose data may be obtained through operations between the matrices; correspondingly, the second pose data may also be embodied in matrix form, which is not limited in the embodiments of the present application.
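As a minimal sketch of this matrix form (a representation assumed for illustration; the embodiments do not fix one, and the names below are hypothetical), suppose both the first calibration parameter and the first pose data are 4x4 homogeneous matrices:

```python
import numpy as np

def apply_first_calibration(t_cal: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Obtain second pose data by applying the first calibration parameter.

    Hypothetical representation: `t_cal` is a 4x4 homogeneous transform
    taking first-coordinate-system (camera) coordinates to the second
    coordinate system (projection surface), and `pose` is the first pose
    data as a 4x4 pose matrix in the camera frame.
    """
    assert t_cal.shape == (4, 4) and pose.shape == (4, 4)
    return t_cal @ pose  # left-multiplication re-expresses the pose

# Usage: a calibration that is a pure translation, applied to an identity pose.
t_cal = np.eye(4)
t_cal[:3, 3] = [0.1, 0.0, 2.0]
second_pose = apply_first_calibration(t_cal, np.eye(4))
```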
In one embodiment, the pose of at least one dimension of the first pose data may be adjusted based on the first calibration parameter, thereby obtaining the second pose data.
In one embodiment, a calibration parameter corresponding to the pose to be adjusted may be determined from the first calibration parameter, and the first pose data may be processed based on that calibration parameter to obtain the second pose data.
As can be seen from the above, the data processing method provided in the embodiments of the present application can, after obtaining the first pose data of the first contour projected on the projection surface, determine the first calibration parameter between the first coordinate system in which the image acquisition device is located and the second coordinate system in which the projection surface is located based on the first pose data. In determining the first calibration parameter, the method therefore not only removes the dependence on specified calibration objects found in the related art, but also determines the first calibration parameter dynamically and in real time from the first pose data, so that the first calibration parameter can be determined automatically and more flexibly.
Based on the foregoing embodiment, in the data processing method provided in the embodiments of the present application, determining the first calibration parameter based on the first pose data may be implemented by the process shown in Fig. 2, a schematic flowchart of determining the first calibration parameter; as shown in Fig. 2, the process may include steps 1021 to 1022:
Step 1021: determine target pose data.
In one embodiment, the target pose data may be preset based on actual calibration requirements; for example, the target pose data may be stored in a memory space of the electronic device, or may be determined by the electronic device from calibration indication information sent by a management device with which the electronic device has established a communication connection, which is not limited in the embodiments of the present application.
In one embodiment, the target pose data may be embodied in the same form as the first pose data; for example, both may be embodied as matrices of the same dimensions.
In one embodiment, the target pose data may include textual data describing the calibration requirements; for example, the target pose data may specify adjusting a first part of the first contour to a first pose and a second part of the first contour to a second pose.
In one embodiment, the target pose data may include condition information from the calibration requirements; for example, the target pose data may specify that the calibration operation is performed on the contour portion corresponding to a third part of the first contour only if the size of that third part is a designated size; correspondingly, if the size of the third part differs from the designated size, the calibration operation may be skipped for the corresponding contour portion.
Step 1022: determine the first calibration parameter based on the correspondence between the first pose data and the target pose data.
In one embodiment, the correspondence between the first pose data and the target pose data may include a correspondence between the pose of a target part in the first pose data and the pose of that part in the target pose data; for example, where the first object is a person, the target part may include the person's hand, and the hand pose in the first pose data may then correspond to the hand pose in the target pose data.
In one embodiment, the correspondence between the first pose data and the target pose data may include a matching relationship between the pose of the target part in the first pose data and its pose in the target pose data. For example, if the hand pose in the first pose data has the fingers horizontal while the hand pose in the target pose data has the fingers vertical, the correspondence may include a weak degree of matching between the two finger poses, that is, a difference of 90 or 270 degrees between them.
For example, determining the first calibration parameter based on the correspondence between the first pose data and the target pose data may be implemented in either of the following ways (a sketch of the first way follows the list):
acquiring the pose information of at least one part in the first pose data and the pose information of the corresponding part in the target pose data, matching the two to obtain the pose difference of the at least one part between the first pose data and the target pose data, quantizing the pose difference, and determining the quantization result as the first calibration parameter; or
acquiring the contour dimension information of at least one part in the first pose data and that of the corresponding part in the target pose data, performing statistical and quantitative calculation on the difference between the two, and determining the result of that calculation as the first calibration parameter.
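As one hedged illustration of quantizing a pose difference into a calibration parameter, the sketch below assumes that N corresponding 2-D landmark points of the first contour and of the target pose are available on the projection surface, and fits a least-squares similarity transform; the Umeyama-style estimator is an assumption chosen for illustration, not a method prescribed by the embodiments:

```python
import numpy as np

def estimate_first_calibration(observed: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Least-squares 2-D similarity transform mapping observed landmarks
    of the first contour onto the target-pose landmarks (Umeyama-style).

    observed, target: (N, 2) arrays of corresponding points on the
    projection surface. Returns a 3x3 homogeneous matrix serving as an
    illustrative stand-in for the quantized pose difference.
    """
    mu_o, mu_t = observed.mean(axis=0), target.mean(axis=0)
    o, t = observed - mu_o, target - mu_t
    cov = t.T @ o / len(o)                      # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / o.var(axis=0).sum()
    T = np.eye(3)
    T[:2, :2] = scale * R
    T[:2, 2] = mu_t - scale * (R @ mu_o)
    return T
```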
As can be seen from the above, in the data processing method provided in the embodiments of the present application, after the target pose data is determined, the first calibration parameter can be determined based on the correspondence between the first pose data and the target pose data. Since the target pose data is controllable and adjustable, first calibration parameters meeting a variety of calibration requirements can be obtained, so that different calibration requirements and/or calibration scenarios can be satisfied, further improving the flexibility of determining the first calibration parameter.
Based on the foregoing embodiment, in the data processing method provided in the embodiments of the present application, determining the target pose data may be implemented through steps A1 to A2:
and step A1, responding to the posture adjustment operation to obtain posture adjustment data.
Wherein the gesture adjustment operation comprises at least an operation of gesture adjustment for at least one dimension of the first contour.
In one embodiment, the gesture adjustment operation may include a text input operation detected in a text input window created by a human-computer interaction mechanism of the electronic device; accordingly, the pose adjustment data may include text data for pose adjustment of at least one dimension of the first outline, such as adjusting a finger of the first object in the first outline to a horizontal pose; illustratively, the pose adjustment data may also include a combination of numeric and textual pose adjustment data for at least one dimension of at least one portion of the first outline, such as adjusting a thumb of a first object in the first outline to a pose at 15 degrees from horizontal.
In one embodiment, the gesture adjustment operation may include a touch adjustment operation such as a slide adjustment operation for a gesture of at least one part of the first contour input to the projection surface; accordingly, the pose adjustment data may include a sliding magnitude obtained by tracking and detecting the sliding adjustment operation, where the magnitude of the sliding magnitude may represent an adjustment magnitude of a pose adjustment dimension for which the sliding adjustment operation is directed.
Step A2: determine the target pose data based on the pose adjustment data.
In one embodiment, if the pose adjustment data includes only data for one-dimensional pose adjustment of one part of the first contour, the pose adjustment data itself may serve as the target pose data; where the pose adjustment data is text or a text-number combination, it may undergo recognition and conversion processing to obtain the target pose data.
In one embodiment, if the pose adjustment data includes data for adjusting at least one dimension of at least two parts of the first contour, the pose adjustment data may be sorted and combined according to the relative positions of the parts in the first contour, and the result of the sorting and combining determined as the target pose data.
As can be seen from the above, in the data processing method provided in the embodiments of the present application, pose adjustment data can be obtained in response to a pose adjustment operation, and the target pose data then determined based on it. The pose adjustment operation can be switched flexibly for different calibration requirements or scenarios, making the determination of the target pose data more controllable and flexible; and since the target pose data is determined in response to a pose adjustment operation, it is determined interactively.
Based on the foregoing embodiment, in the data processing method provided in the embodiment of the present application, determining the target pose data may also be implemented through step B1 to step B2:
and step B1, obtaining the attitude data of the first object.
In one embodiment, the pose data of the first object may include data representing an actual pose of the first object; for example, the pose data of the first object may represent an actual pose of the first object with respect to a specified reference object, wherein the specified reference object may include the projection surface and/or the image capture device, and may include objects other than the projection surface and the image capture device.
In one embodiment, the pose data of the first object may represent an overall pose of the first object, which in the case of a human being may include the human being in a jumping-up pose; for example, the pose data of the first object may include a pose of a part of the first object, for example, in the case of a person, the pose data of the first object may include a pose of an arm swing.
In one embodiment, the pose data of the first object may be determined by performing an analytical recognition of at least one portion of the first object in the first data acquired by the image acquisition device.
In one embodiment, the pose data of the first object may be obtained by: acquiring an image or video including a first object by a two-dimensional image acquisition device, then performing feature recognition on the image or video including the first object, and determining the result of the feature recognition as attitude data of the first object; for example, the image or video including the first object may include a designated reference object, and thus, by performing feature recognition on the designated reference object in the image or video including the first object and the first object, the posture data of the first object relative to the designated reference object, that is, the posture data of the first object, may be determined.
Step B2: determine the target pose data to be the pose data of the first object.
As can be seen from the above, in the data processing method provided in the embodiments of the present application, after the pose data of the first object is obtained, the target pose data can be determined to be that pose data. Because the pose of the first object is adjustable and controllable, the target pose data is likewise adjustable and controllable, which can further improve the flexibility and controllability of determining the first calibration parameter. Moreover, since the pose data of the first object represents its actual pose, the target pose data is consistent with the actual pose of the first object, improving the accuracy of the target pose data.
Based on the foregoing embodiment, in the data processing method provided in the embodiments of the present application, obtaining the first pose data may be implemented by the process shown in Fig. 3, a schematic flowchart of obtaining the first pose data; as shown in Fig. 3, the process may include steps 1011 to 1013:
Step 1011: determine a second calibration parameter.
The second calibration parameter comprises a coordinate conversion parameter between the first coordinate system and the third coordinate system; the third coordinate system includes a coordinate system having a specified coordinate point on the projection surface as a coordinate origin.
In one embodiment, the specified coordinate point on the projection surface may include a coordinate point on an edge of the projection surface; alternatively, it may include the geometric center point of the projection surface, or a point in a specified relative position to at least two edges or to the geometric center point, which is not limited in the embodiments of the present application.
In one embodiment, where the projection surface is planar, the third coordinate system and the second coordinate system may both be two-dimensional coordinate systems lying in the plane of the projection surface, related by a displacement and/or rotation; alternatively, they may be three-dimensional coordinate systems of that plane, for example with their x- and y-axes lying in the projection surface and their z-axes parallel to its normal direction.
In one embodiment, where the coordinate origin of the second coordinate system is the same as that of the third coordinate system, the two may be related by a rotation; if in addition the x- and y-axes of the second coordinate system coincide with those of the third, the two may be the same coordinate system.
In an implementation manner, the second calibration parameter may be embodied in a matrix form, which is not limited in this application.
For example, the second calibration parameter may be determined as follows (a sketch of one possible computation follows):
setting an identification mark at the specified coordinate point on the projection surface and establishing the third coordinate system based on it; determining and marking at least two further coordinate points in the third coordinate system; acquiring, with the image acquisition device, image data that includes the specified coordinate point and the at least two further coordinate points; analyzing the image data to obtain the coordinate information of these three coordinate points in the first coordinate system; and determining the second calibration parameter based on the correspondence between the coordinate information of the three points in the first coordinate system and in the third coordinate system.
Step 1012: process the contour data of the second contour based on the second calibration parameter to obtain the contour data of the first contour.
The contour data of the second contour includes the image data corresponding to the at least partial contour in the first data.
In one embodiment, the contour data of the second contour may be obtained by performing feature extraction on the first data to obtain the at least partial contour and then using the depth information of that contour.
For example, processing the contour data of the second contour based on the second calibration parameter may be implemented as follows:
performing matrix multiplication between the matrix corresponding to the second calibration parameter and the coordinate information corresponding to the contour data of the second contour, thereby obtaining the contour data of the first contour.
Step 1013: analyze the contour data of the first contour to obtain the first pose data.
In one embodiment, the first pose data may be obtained by performing overall pose feature recognition on the contour data of the first contour.
As can be seen from the above, in the data processing method provided in the embodiments of the present application, the pose data of the first contour projected on the projection surface is determined by using the second calibration parameter, between the first coordinate system in which the image acquisition device is located and the third coordinate system whose coordinate origin is the specified coordinate point on the projection surface, to process the image data corresponding to the at least partial contour in the first data, obtaining the contour data of the first contour, and then analyzing that contour data. Because the relative positional relationship between the image acquisition device and the projection surface is comparatively clear, the second calibration parameter can objectively reflect it, and the contour data of the first contour obtained by processing the contour data of the second contour based on the second calibration parameter can comprehensively and objectively incorporate that relationship, improving the objectivity and accuracy of the first pose data.
Based on the foregoing embodiment, in the data processing method provided in the embodiment of the present application, determining the second calibration parameter may be implemented through steps C1 to C2:
and step C1, determining a third calibration parameter between the third coordinate system and the fourth coordinate system.
The fourth coordinate system comprises a coordinate system comprising any three coordinate points on the projection surface except the appointed point; the fourth coordinate system takes one coordinate point of any three coordinate points as a coordinate origin; any three coordinate points are not collinear.
In one embodiment, any three coordinate points may be marked on the projection surface in advance; for example, any three coordinate points may be coordinate points with higher identification on the projection surface, such as coordinate points on intersecting edges on the projection surface.
In one embodiment, under the condition that the projection plane is a plane, the fourth coordinate system and the third coordinate system and the second coordinate system may be two-dimensional coordinate systems in the plane of the projection plane; illustratively, the third coordinate system and the fourth coordinate system can be adjusted into the second coordinate system by rotation and/or translation; for example, the third coordinate system and the fourth coordinate system may include a three-dimensional coordinate system of a plane where the projection plane is located, wherein, for example, x-axes and y-axes of the third coordinate system and the fourth coordinate system are respectively located in the projection plane, and z-axes of the third coordinate system and the fourth coordinate system may be parallel to a normal direction of the projection plane.
For example, the third calibration parameter may be determined as follows (a sketch of the frame construction follows below):
determining the relative positional relationship among the three coordinate points; after determining the coordinate origin of the fourth coordinate system and establishing it, determining the coordinates of the three points in the fourth coordinate system to obtain a third set of coordinate points; determining, based on the relative positional relationship among the three points, the corresponding coordinate points in the third coordinate system and their coordinates to obtain a fourth set of coordinate points; and determining the third calibration parameter based on the coordinates of corresponding points in the third and fourth sets of coordinate points.
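For illustration only, one way to establish the fourth coordinate system from the three points; the axis conventions below (x along P1 to P2, z along the surface normal) and the helper name are assumptions:

```python
import numpy as np

def frame_from_three_points(p1, p2, p3) -> np.ndarray:
    """4x4 pose of the fourth coordinate system built from three
    non-collinear points, with p1 as the coordinate origin.

    The returned matrix maps fourth-frame coordinates into the frame
    in which p1, p2, p3 are expressed.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x = (p2 - p1) / np.linalg.norm(p2 - p1)   # x-axis: p1 toward p2
    z = np.cross(p2 - p1, p3 - p1)            # surface normal
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                        # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p1
    return T
```

Given poses T_third and T_fourth of the third and fourth coordinate systems expressed in a common frame, a conversion between them follows as np.linalg.inv(T_third) @ T_fourth.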
Step C2: determine the second calibration parameter based on the third calibration parameter.
For example, determining the second calibration parameter based on the third calibration parameter may be implemented in either of the following ways:
weighting at least part of the third calibration parameter and determining the weighted result as the second calibration parameter; or
smoothing at least part of the third calibration parameter and determining the smoothed result as the second calibration parameter.
As can be seen from the above, in the data processing method provided in the embodiments of the present application, after the third calibration parameter between the third and fourth coordinate systems is determined, the second calibration parameter can be determined based on it. Because the fourth coordinate system is defined by any three non-collinear coordinate points on the projection surface other than the specified point, with one of them as its origin, it can represent an arbitrary coordinate system on the projection surface. The third calibration parameter therefore represents the coordinate conversion between the coordinate system whose origin is the specified coordinate point and an arbitrary coordinate system on the projection surface; that is, it can objectively and comprehensively characterize the coordinate conversion between different coordinate systems on the projection surface, and the second calibration parameter determined from it can comprehensively and objectively reflect the coordinate conversion relationship between the first coordinate system of the image acquisition device and the third coordinate system whose origin is the specified coordinate point on the projection surface.
Based on the foregoing embodiment, in the data processing method provided in the embodiment of the present application, the determining of the second calibration parameter based on the third calibration parameter may be implemented through steps D1 to D3:
and D1, obtaining first coordinate data and second coordinate data of any three coordinate points.
The first coordinate data comprise coordinate data of any three coordinate points in a first coordinate system; the second coordinate data includes coordinate data of arbitrary three coordinate points in a fourth coordinate system.
In one embodiment, the first coordinate system may be a three-dimensional coordinate system, and thus, each of the first coordinate data may include coordinate data of three dimensions; for example, the fourth coordinate system may be a three-dimensional coordinate system, and thus, each of the fourth coordinate data may include coordinate data of three dimensions.
In one embodiment, the first coordinate data may be determined by the depth image data including any three coordinate points acquired by the image acquisition device and distance information of the image acquisition device relative to the projection plane.
In one embodiment, the second coordinate data may be determined after the fourth coordinate system is determined.
Step D2: determine a fourth calibration parameter between the first coordinate data and the second coordinate data.
For example, the fourth calibration parameter may be determined based on the correspondence between the coordinate data of each coordinate point in the first coordinate data and the coordinate data of the corresponding point in the second coordinate data, which is not limited in the embodiments of the present application.
Step D3: process the fourth calibration parameter based on the third calibration parameter to determine the second calibration parameter.
In one embodiment, the second calibration parameter may be determined in either of the following ways:
performing correction processing on at least one dimension of the fourth calibration parameter based on the third calibration parameter and determining the corrected result as the second calibration parameter; or
performing correction processing on the rotation and/or displacement dimensions of the fourth calibration parameter based on the rotation and/or displacement information between the third and fourth coordinate systems contained in the third calibration parameter, and determining the corrected result as the second calibration parameter.
Fig. 4 is a schematic diagram of the principle of determining the second calibration parameter provided in the embodiments of the present application. As shown in Fig. 4, a first object 401 may be located between a projection surface 402 and an image acquisition device 403. The coordinate system of the image acquisition device 403 may be the first coordinate system 404, whose coordinate origin may be the optical center O_c of the image acquisition device 403; the first coordinate system 404 may be the three-dimensional coordinate system formed by O_cX_c, O_cY_c, and O_cZ_c.
In Fig. 4, the third coordinate system 405 may be a three-dimensional coordinate system whose coordinate origin is the geometric center O_s of the projection surface 402; the third coordinate system 405 may be the three-dimensional coordinate system formed by O_sX_s, O_sY_s, and O_sZ_s, where O_sZ_s may be the normal of the projection surface 402 passing through its geometric center O_s.
In Fig. 4, the fourth coordinate system 406 may be defined by any three coordinate points on the projection surface 402 other than the geometric center O_s, namely P_1, P_2, and P_3, where the coordinate origin of the fourth coordinate system 406 may be P_1; the fourth coordinate system 406 may be the three-dimensional coordinate system formed by O_tX_t, O_tY_t, and O_tZ_t, where O_tZ_t may be the normal of the projection surface 402 passing through P_1.
In Fig. 4, the first contour 407 may include the overall contour of the first object 401. Illustratively, the second calibration parameter T_CS may include the coordinate conversion parameters between the first coordinate system 404 and the third coordinate system 405; the third calibration parameter T_TS may include the coordinate conversion parameters between the third coordinate system 405 and the fourth coordinate system 406; and the fourth calibration parameter T_CT may include the coordinate conversion parameters between the first coordinate system 404 and the fourth coordinate system 406.
For example, the third coordinate system 405 and the fourth coordinate system 406 may be determined based on the size information and/or the geometry information of the projection surface 402.
Illustratively, any three coordinate points, P, in the fourth coordinate system 406 may be determined 1 、P 2 And P 3 The second coordinate data; p acquired by the image acquisition device 403 and including any three coordinate points 1 、P 2 And P 3 The coordinate data of any three coordinate points in the first coordinate system 404, i.e. the first coordinate data, can be determined, so that the fourth calibration parameter T between the first coordinate system 404 and the fourth coordinate system 406 can be determined by the first coordinate data and the second coordinate data CT
For example, in Fig. 4, the third calibration parameter T_TS between the third coordinate system 405 and the fourth coordinate system 406 may be determined based on the second coordinate data and the coordinate data of the corresponding coordinate points in the third coordinate system 405.
Illustratively, after the fourth calibration parameter T_CT and the third calibration parameter T_TS are determined, the fourth calibration parameter T_CT may be processed based on the third calibration parameter T_TS to determine the second calibration parameter T_CS.
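Under the assumption that each calibration parameter is represented as a 4×4 homogeneous transform, with T_CT mapping camera coordinates into the fourth coordinate system and T_TS mapping fourth-coordinate-system points into the third coordinate system, this processing reduces to a single matrix product that corrects the rotation and displacement dimensions at once. A minimal sketch, not the patent's prescribed formula:

```python
import numpy as np

def second_calibration(T_TS: np.ndarray, T_CT: np.ndarray) -> np.ndarray:
    """Chain camera -> fourth frame -> third frame: the composed transform
    maps first-coordinate-system points directly into the third coordinate
    system, i.e. it plays the role of the second calibration parameter T_CS."""
    return T_TS @ T_CT

# The corrected rotation and displacement can be read back out of the result:
# T_CS = second_calibration(T_TS, T_CT); R_CS, t_CS = T_CS[:3, :3], T_CS[:3, 3]
```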
As can be seen from the above, in the data processing method provided in the embodiment of the present application, the second calibration parameter is determined based on the fourth calibration parameter between the first coordinate system and the fourth coordinate system together with the third calibration parameter between the third coordinate system and the fourth coordinate system. In other words, the second calibration parameter combines the coordinate conversion parameters between the first coordinate system and an arbitrary coordinate system on the projection surface with the coordinate conversion parameters between that arbitrary coordinate system and the coordinate system whose origin is the designated coordinate point. The second calibration parameter can therefore represent not only the relative position relationship between the image acquisition device and the projection surface, but also the pose transformation relationship between any coordinate system on the projection surface and the third coordinate system, and can thus accurately reflect the coordinate transformation relationship between any point on the projection surface and the image acquisition device.
Based on the foregoing embodiment, in the data processing method provided in the embodiment of the present application, processing the contour data of the second contour based on the second calibration parameter to obtain the contour data of the first contour may be implemented by the flow shown in Fig. 5. Fig. 5 is a schematic flow diagram of obtaining the contour data of the first contour provided in the embodiment of the present application; as shown in Fig. 5, the flow may include steps 501 to 503:
Step 501: perform view transformation on the contour data of the second contour based on the second calibration parameter to obtain view contour data.
In one embodiment, the view contour data may include three-dimensional contour data corresponding to the contour data of the second contour.
For example, a triangular patch may be constructed from the space points corresponding to every three horizontally and vertically adjacent pixels in the first data, and the constructed triangular patches may be combined into the contour data of the second contour, i.e. a three-dimensional patch model of the first object such as a human body; the spatial position of the model is referenced to the first coordinate system.
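For illustration only, a minimal sketch of this patch construction, assuming the depth pixels have already been back-projected into camera-space points (an H×W×3 array) and ignoring invalid-depth handling:

```python
import numpy as np

def depth_points_to_patches(points: np.ndarray) -> np.ndarray:
    """Combine back-projected space points of an H x W depth image into
    triangular patches: every 2x2 block of adjacent pixels yields two
    triangles. Returns an (M, 3, 3) array of triangle vertices in the
    first (camera) coordinate system."""
    H, W, _ = points.shape
    tris = []
    for v in range(H - 1):
        for u in range(W - 1):
            a, b = points[v, u], points[v, u + 1]
            c, d = points[v + 1, u], points[v + 1, u + 1]
            tris.append((a, b, c))          # upper-left triangle of the quad
            tris.append((b, d, c))          # lower-right triangle of the quad
    return np.asarray(tris)
```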
For example, the three-dimensional patch model of the first object, i.e. the contour data of the second contour, may be processed through a graphics rendering pipeline; for example, the model-view transformation of the graphics rendering pipeline may be constructed based on the second calibration parameter, so that the view contour data may be obtained by processing the contour data of the second contour through the graphics rendering pipeline.
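As a sketch of this model-view step, assuming the second calibration parameter T_CS is a 4×4 homogeneous matrix mapping first-coordinate-system points into the third (projection-surface) coordinate system:

```python
import numpy as np

def apply_model_view(vertices: np.ndarray, T_CS: np.ndarray) -> np.ndarray:
    """Apply the second calibration parameter as the model-view transform:
    (N, 3) camera-space vertices of the second-contour patch model are mapped
    into the projection-surface frame, giving the view contour data."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ T_CS.T)[:, :3]
```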
Step 502: obtain the projection parameters of the projection surface.
In one embodiment, the projection parameters of the projection surface may include at least one of physical size, structure, shape, and resolution of the projection surface.
Step 503: perform orthogonal projection processing on the view contour data based on the projection parameters to obtain the contour data of the first contour.
For example, the projection parameters may be set in the graphics rendering pipeline, and the orthogonal projection processing may then be performed on the view contour data through the pipeline to obtain the contour data of the first contour.
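A hedged sketch of this step, assuming the projection parameters consist of the physical width and height of the projection surface plus its pixel resolution, and that the view contour data is expressed in a surface-centered frame; an actual implementation would typically configure an orthographic projection matrix in the rendering pipeline instead:

```python
import numpy as np

def orthographic_to_pixels(view_pts: np.ndarray, width_m: float, height_m: float,
                           res_x: int, res_y: int) -> np.ndarray:
    """Orthogonally project view contour data onto the projection surface:
    drop the normal (Z) component and map physical extent to pixels."""
    x = (view_pts[:, 0] / width_m + 0.5) * res_x    # metres -> pixel column
    y = (0.5 - view_pts[:, 1] / height_m) * res_y   # metres -> pixel row (Y up)
    return np.stack([x, y], axis=1)
```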
As an example, the projection effect of the contour data of the first contour on the projection surface may be as shown by the first contour 407 in Fig. 4. As can be seen from Fig. 4, the first contour 407 has a certain inclination with respect to the horizontal edge of the projection surface 402; under the condition that the first object 401 stands vertically on a plane perpendicular to the projection surface 402, a deviation occurs between the posture of the first contour 407 and the actual posture of the first object 401.
For example, under the condition that the first object 401 is a teacher, the projection surface 402 is a holographic projection screen, and the image acquisition device 403 is a depth camera, after the depth information of the teacher is acquired by the depth camera, the depth information is processed by the graphics rendering pipeline based on the second calibration parameter and the projection parameters of the projection surface, obtaining the first contour 407, i.e. a contour picture of the teacher that can be projected onto the projection surface.
As can be seen from the above, in the data processing method provided in the embodiment of the present application, the contour data of the second contour can be subjected to view transformation based on the second calibration parameter to obtain the view contour data, and the view contour data can be processed based on the projection parameters of the projection surface to obtain the contour data of the first contour. The contour data of the first contour therefore not only corrects the distortion caused by the spatial position relationship between the image acquisition device and the projection surface, but also meets the actual projection requirements of the projection surface, so that the probability of distortion in the contour data of the first contour can be reduced.
Based on the foregoing embodiment, the data processing method provided in the embodiment of the present application may further include steps E1 to E3:
Step E1: perform configuration rendering on the target contour and the second object to obtain target data.
Wherein the second object comprises at least a virtual object; the pose data of the target profile is second pose data.
In one embodiment, the pose of the first contour may be adjusted in at least one dimension based on the second pose data, and the pose-adjusted first contour may be determined as the target contour; for example, the pose-adjusted first contour may be the target contour 408 in Fig. 4; illustratively, the pose of the target contour 408 may be the same as the actual pose of the first object 401.
In one embodiment, the second object may comprise at least one virtual object rendered by the electronic device; for example, in a case where the second object includes at least two virtual objects, at least one of the type, color, size, and relative positional relationship thereof with the first object may be different for the respective virtual objects.
In one embodiment, the second object may include a physical object and a virtual object; for example, in a case where the first object is a teacher, the second object may include at least one virtual object rendered by the electronic device, and at least one physical object; illustratively, the entity objects may include at least one of student objects and teaching aid objects.
In one approach, the target data may be obtained by first determining the configuration data, then determining the second object based on the configuration data, and then performing configuration rendering on the target contour 408 and the second object based on the configuration data; for example, the configuration data may include information configuring, for the pose-adjusted first contour (i.e. the target contour) and the second object, at least one of temporal order, spatial order, rendering color, rendering brightness, rendering contrast, shape size, whether to follow in real time, and whether to interact, as sketched below.
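For illustration, the configuration data could be organized as a simple structure; every field name below is hypothetical, chosen only to mirror the items listed above:

```python
from dataclasses import dataclass

@dataclass
class RenderConfig:
    """Hypothetical configuration data for rendering the target contour and
    the second object; field names are illustrative, not from the patent."""
    time_order: int = 0               # temporal order of rendering
    z_order: int = 0                  # spatial (stacking) order
    color: tuple = (255, 255, 255)    # rendering color
    brightness: float = 1.0           # rendering brightness
    contrast: float = 1.0             # rendering contrast
    scale: float = 1.0                # shape size
    follow_realtime: bool = True      # whether the object follows in real time
    interactive: bool = False         # whether contour and object interact
```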
Step E2: obtain the trajectory information of the target contour on the projection surface.
In one embodiment, the trajectory information may include the movement trajectory information of the target contour on the projection surface that corresponds to the actual movement trajectory of the first object.
In one embodiment, the trajectory information may include the predicted movement trajectory of the target contour on the projection surface at a future time, obtained by performing tracking analysis on the actual movement trajectory of the first object.
In one embodiment, the trajectory information may include the overall movement trajectory information of the first object or the target contour, and may further include the movement trajectory information of a part of the first object or the target contour; for example, in a case where the first object is a teacher, the trajectory information may include the movement trajectory information of the teacher's hand.
For example, the trajectory information may be determined by performing tracking recognition on the historical movement trajectory and the current movement trajectory of the first object and/or the target contour.
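As one hedged example of such tracking analysis, a constant-velocity predictor over the contour's recent positions; the patent does not specify a particular motion model:

```python
import numpy as np

def predict_next(track: list, horizon: int = 1) -> np.ndarray:
    """Predict a future position of the target contour (e.g. its centroid on
    the projection surface) from its historical and current positions using a
    constant-velocity model."""
    if len(track) < 2:
        return np.asarray(track[-1])
    velocity = np.asarray(track[-1]) - np.asarray(track[-2])  # per-frame motion
    return np.asarray(track[-1]) + horizon * velocity
```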
Step E3: output the target data to the projection surface based on the trajectory information.
In one embodiment, the target data may be dynamically output in the projection plane based on the trajectory information.
Fig. 6 is a schematic diagram of outputting target data to a projection surface according to an embodiment of the present disclosure. As shown in Fig. 6, the target data 601 may include the overall outline of the teacher, where the postures of the teacher's arms and hands may change dynamically; for example, the overall outline of the teacher, consistent with the teacher's overall movement trajectory, can be output on the projection surface 402 in real time; for example, in a case where the second object includes a virtual object, the virtual object may further be attached to at least one part of the teacher's overall outline; illustratively, the interaction effect between the teacher's overall outline and the virtual object, such as the bounce caused by an elastic ball touching the teacher's body as it falls, can also be rendered on the projection surface 402.
Under the condition that the projection surface 402 is a holographic projection surface and the teacher's outline on the projection surface 402 is consistent with the teacher's actual posture, students can see the rendered outline superimposed on the teacher's actual figure; for example, the teacher contour rendered on the projection surface 402 can be used for the background calculation of holographic somatosensory interaction, so that students can perceive, in real time, a realistic interaction between the teacher's contour and the virtual object.
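A toy sketch of the kind of background calculation that could drive the bounce effect mentioned above, under the illustrative assumption that the teacher's contour is available as a boolean pixel mask; this is not the patent's prescribed method:

```python
import numpy as np

def bounce_if_hit(ball_pos, ball_vel, contour_mask, restitution=0.8):
    """If the falling ball's pixel lies inside the contour mask, reflect its
    vertical velocity (a damped elastic bounce); otherwise leave it unchanged."""
    x, y = int(ball_pos[0]), int(ball_pos[1])
    h, w = contour_mask.shape
    if 0 <= y < h and 0 <= x < w and contour_mask[y, x]:
        return np.array([ball_vel[0], -restitution * ball_vel[1]])
    return np.asarray(ball_vel)
```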
As can be seen from the above, in the data processing method provided in the embodiment of the present application, the target data can be obtained by performing configuration rendering on the target contour and the second object, and the target data can be output to the projection surface based on the trajectory information of the target contour on the projection surface. The data processing method provided by the embodiment of the present application can therefore realize virtual-real combined interactive rendering of the first object on the projection surface through the target contour derived from the first calibration parameter.
Based on the foregoing embodiment, an embodiment of the present application further provides a data processing apparatus 7, and fig. 7 is a schematic structural diagram of the data processing apparatus 7 provided in the embodiment of the present application, and as shown in fig. 7, the apparatus 7 may include:
a determining module 701, configured to obtain first posture data; wherein the first pose data comprises pose data of a first contour projected on the projection surface; the first contour corresponds to at least a partial contour of a first object in first data acquired by an image acquisition device;
the determining module 701 is further configured to determine a first calibration parameter based on the first attitude data; the first calibration parameters comprise coordinate conversion parameters between a first coordinate system where the image acquisition device is located and a second coordinate system where the projection surface is located.
The processing module 702 is configured to process the first posture data based on the first calibration parameter to obtain second posture data.
In some embodiments, the determining module 701 is configured to determine target pose data, and determine the first calibration parameter based on the correspondence between the first pose data and the target pose data.
In some embodiments, the determining module 701 is configured to obtain pose adjustment data in response to a pose adjustment operation; determining target attitude data based on the attitude adjustment data; wherein the gesture adjustment operation comprises at least an operation of gesture adjustment for at least one dimension of the first contour.
In some embodiments, the determining module 701 is configured to obtain the pose data of the first object, and determine the target pose data as the pose data of the first object.
In some embodiments, the determining module 701 is configured to determine a second calibration parameter; the second calibration parameter comprises a coordinate conversion parameter between the first coordinate system and the third coordinate system; the third coordinate system comprises a coordinate system taking a specified coordinate point on the projection surface as a coordinate origin;
a determining module 701, configured to process the contour data of the second contour based on the second calibration parameter to obtain the contour data of the first contour; the contour data of the second contour comprises image data corresponding to at least part of the contour in the first data;
the processing module 702 is configured to analyze the contour data of the first contour to obtain the first posture data.
In some embodiments, the determining module 701 is configured to determine a third calibration parameter between a third coordinate system and a fourth coordinate system; the fourth coordinate system comprises a coordinate system comprising any three coordinate points on the projection surface except the appointed point; the fourth coordinate system takes one coordinate point of any three coordinate points as a coordinate origin; any three coordinate points are not collinear;
a determining module 701, configured to determine the second calibration parameter based on the third calibration parameter.
In some embodiments, the determining module 701 is configured to obtain first coordinate data and second coordinate data of any three coordinate points; the first coordinate data comprise coordinate data of any three coordinate points in a first coordinate system; the second coordinate data comprises coordinate data of any three coordinate points in a fourth coordinate system;
a determining module 701, configured to determine a fourth calibration parameter between the first coordinate data and the second coordinate data;
and a processing module 702, configured to process the fourth calibration parameter based on the third calibration parameter, and determine the second calibration parameter.
In some embodiments, the determining module 701 is configured to perform view transformation on the contour data of the second contour based on the second calibration parameter to obtain view contour data;
a processing module 702, configured to obtain the projection parameters of the projection surface, and perform orthogonal projection processing on the view contour data based on the projection parameters to obtain the contour data of the first contour.
In some embodiments, the processing module 702 is configured to perform configuration rendering on the target contour and the second object, so as to obtain target data; wherein the second object comprises at least a virtual object; the attitude data of the target contour is second attitude data;
a processing module 702, configured to obtain trajectory information of the target contour on the projection plane;
the data processing device also comprises an output module which is used for outputting the target data to the projection surface based on the track information.
Based on the foregoing embodiments, an electronic device is further provided in an embodiment of the present application, and includes a processor and a memory, where the memory stores a computer program, and when the computer program is executed by the processor of the electronic device, the data processing method provided in any one of the foregoing embodiments can be implemented.
The processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
The Memory may be a volatile Memory (volatile Memory), such as a Random Access Memory (RAM); or a non-volatile Memory (non-volatile Memory), such as a Read-Only Memory (ROM), a flash Memory, a Hard Disk Drive (HDD) or a Solid State Disk (SSD); or a combination of the above types of memories and provides instructions and data to the processor.
The determining module 701, the processing module 702 and the output module in the data processing apparatus 7 may be implemented by a processor of an electronic device.
Based on the foregoing embodiments, the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor of an electronic device, the computer program can implement the data processing method according to any of the foregoing embodiments.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
The methods disclosed in the method embodiments provided by the present application can be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in various product embodiments provided by the application can be combined arbitrarily to obtain new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided herein may be combined in any combination to arrive at new method or apparatus embodiments without conflict.
The computer-readable storage medium may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface Memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); and may be various electronic devices such as mobile phones, computers, tablet devices, and personal digital assistants, including one or any combination of the above-mentioned memories.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present application.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A method of data processing, wherein the method comprises:
obtaining first attitude data; wherein the first pose data comprises pose data of a first contour projected on a projection surface; the first contour corresponds to at least a partial contour of a first object in first data acquired by an image acquisition device;
determining a first calibration parameter based on the first attitude data; the first calibration parameters comprise coordinate conversion parameters between a first coordinate system where the image acquisition device is located and a second coordinate system where the projection surface is located;
and processing the first attitude data based on the first calibration parameters to obtain second attitude data.
2. The method of claim 1, wherein said determining a first calibration parameter based on said first attitude data comprises:
determining target attitude data;
and determining the first calibration parameter based on the corresponding relation between the first attitude data and the target attitude data.
3. The method of claim 2, wherein the determining target pose data comprises:
obtaining attitude adjustment data in response to the attitude adjustment operation; wherein the pose adjustment operation comprises at least an operation of pose adjustment for at least one dimension of the first contour;
determining the target pose data based on the pose adjustment data.
4. The method of claim 2, wherein the determining target pose data comprises:
obtaining attitude data of the first object;
determining the target pose data as pose data of the first object.
5. The method of claim 1, wherein the obtaining first pose data comprises:
determining a second calibration parameter; the second calibration parameter comprises a coordinate conversion parameter between the first coordinate system and a third coordinate system; the third coordinate system comprises a coordinate system taking a specified coordinate point on the projection surface as a coordinate origin;
processing the profile data of the second profile based on the second calibration parameters to obtain the profile data of the first profile; wherein the contour data of the second contour comprises image data corresponding to the at least part of the contour in the first data;
and analyzing the profile data of the first profile to obtain the first attitude data.
6. The method of claim 5, wherein said determining a second calibration parameter comprises:
determining a third calibration parameter between the third coordinate system and a fourth coordinate system; wherein the fourth coordinate system includes a coordinate system including any three coordinate points on the projection surface excluding the specified point; the fourth coordinate system takes one of the arbitrary three coordinate points as a coordinate origin; the arbitrary three coordinate points are not collinear;
and determining the second calibration parameter based on the third calibration parameter.
7. The method of claim 6, wherein said determining the second calibration parameter based on the third calibration parameter comprises:
obtaining first coordinate data and second coordinate data of the arbitrary three coordinate points; wherein the first coordinate data comprises coordinate data of the arbitrary three coordinate points in the first coordinate system; the second coordinate data includes coordinate data of the arbitrary three coordinate points in the fourth coordinate system;
determining a fourth calibration parameter between the first coordinate data and the second coordinate data;
and processing the fourth calibration parameter based on the third calibration parameter to determine the second calibration parameter.
8. The method of claim 5, wherein the processing the profile data of the second profile based on the second calibration parameters to obtain the profile data of the first profile comprises:
view transformation is carried out on the contour data of the second contour based on the second calibration parameters to obtain view contour data;
acquiring projection parameters of the projection surface;
and carrying out orthogonal projection processing on the view profile data based on the projection parameters to obtain the profile data of the first profile.
9. The method of claim 1, wherein the method further comprises:
configuring and rendering the target contour and the second object to obtain target data; wherein the second object comprises at least a virtual object; the attitude data of the target profile is the second attitude data;
obtaining the track information of the target contour on the projection surface;
and outputting the target data to the projection surface based on the track information.
10. A data processing apparatus, wherein the apparatus comprises:
the determining module is used for obtaining first attitude data; wherein the first pose data comprises pose data of a first contour projected on a projection surface; the first contour corresponds to at least a partial contour of a first object in first data acquired by an image acquisition device;
the determining module is further configured to determine a first calibration parameter based on the first attitude data; the first calibration parameters comprise coordinate conversion parameters between a first coordinate system where the image acquisition device is located and a second coordinate system where the projection surface is located;
and the processing module is used for processing the first attitude data based on the first calibration parameter to obtain second attitude data.
CN202210365226.6A 2022-04-07 2022-04-07 Data processing method and device Pending CN114882107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210365226.6A CN114882107A (en) 2022-04-07 2022-04-07 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210365226.6A CN114882107A (en) 2022-04-07 2022-04-07 Data processing method and device

Publications (1)

Publication Number Publication Date
CN114882107A true CN114882107A (en) 2022-08-09

Family

ID=82670203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210365226.6A Pending CN114882107A (en) 2022-04-07 2022-04-07 Data processing method and device

Country Status (1)

Country Link
CN (1) CN114882107A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117278735A (en) * 2023-09-15 2023-12-22 山东锦霖智能科技集团有限公司 Immersive image projection equipment
CN117278735B (en) * 2023-09-15 2024-05-17 山东锦霖智能科技集团有限公司 Immersive image projection equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination