Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, the main execution body of the flow is a terminal device. The terminal devices include, but are not limited to, servers, computers, smartphones, tablet computers, and the like that are capable of executing the positioning method. Preferably, the terminal device is a robot device, which can acquire the positioning images through a visual sensor or the like. Fig. 1 shows a flowchart of an implementation of the positioning method provided in the first embodiment of the present application, which is detailed as follows:
in S101, a plurality of positioning images including a highlight target are acquired.
In this embodiment, obtaining the plurality of positioning images containing the highlight target may specifically be: acquiring the positioning images containing the highlight target through a vision sensor installed on the robot device. Illustratively, the vision sensor can acquire images pointing in any direction of the indoor environment where the robot is located; specifically, the vision sensor may be an omnidirectional camera.
In S102, the coordinate information of the highlight object in each of the positioning images is identified.
In the present embodiment, the coordinate information refers to the coordinates of the highlight target within the positioning image. For example, if the coordinates of the lower-left corner are (0,0) and the coordinates of the upper-right corner are (40,10), the positioning image is divided into 40 coordinate units in the horizontal direction and 10 coordinate units in the vertical direction.
In a possible implementation manner, identifying the coordinate information of the highlight target in each of the positioning images may specifically be: performing gray-scale transformation on the positioning image to obtain a gray-scale image, and determining the area of the highlight target according to the gray-scale value of each pixel in the gray-scale image. Specifically, pixels whose gray-scale values are higher than a preset gray-scale threshold are identified as highlight pixels, an area formed by contiguous adjacent highlight pixels is identified as the area of the highlight target, and the coordinates of a preset point of that area are identified as the coordinate information of the highlight target in the positioning image. The preset point may be the geometric center, the center of gravity, or the gray-scale centroid of the area where the highlight target is located; the preset point may be calculated by a clustering algorithm, for which reference may be made to the prior art, and details are not repeated herein.
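As a minimal illustrative sketch of the gray-scale thresholding step (not the claimed implementation), the following treats all above-threshold pixels as a single highlight target and returns its geometric center; the threshold value and test image are assumptions:

```python
import numpy as np

def highlight_centroid(gray, threshold=200):
    """Return the (col, row) geometric center of pixels above `threshold`,
    or None when no highlight pixel exists."""
    rows, cols = np.nonzero(gray > threshold)
    if rows.size == 0:
        return None  # no highlight pixels found
    return (cols.mean(), rows.mean())

# A 6x6 synthetic gray-scale image with one bright 2x2 patch.
gray = np.zeros((6, 6), dtype=np.uint8)
gray[2:4, 3:5] = 255
print(highlight_centroid(gray))  # geometric center of the bright patch
```

A full implementation would first split the highlight pixels into contiguous regions so that several targets in one image can be handled separately.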
In S103, positioning information of the robot is determined based on a plurality of pieces of the coordinate information.
In the present embodiment, the positioning information includes a spatial horizontal position and an attitude rotation angle of the robot. Each positioning image is acquired by the robot at each position, and generally, the position of the highlight object in the space is not changed, so that the coordinate information in each positioning image reflects the spatial horizontal position and the attitude rotation angle (i.e., the facing direction of the vision sensor) of the robot at each position.
In a possible implementation manner, determining the positioning information of the robot based on the plurality of pieces of coordinate information may specifically be: presetting a standard positioning image, where the standard positioning image corresponds to standard positioning information of the robot and standard coordinate information of each highlight target; and performing coordinate transformation between the coordinate information of the same highlight target and the standard coordinate information, thereby determining coordinate-system transformation parameters between the standard positioning image and the positioning image. The coordinate-system transformation parameters represent the difference between the positioning information of the robot when acquiring the positioning image and the standard positioning information, so the positioning information can be determined from the coordinate-system transformation parameters and the standard positioning information.
In another possible implementation manner, the determining the positioning information of the robot based on the coordinate information may specifically be: presetting a positioning model, inputting the coordinate information into the positioning model, and outputting the positioning information of the robot; the positioning model is determined according to a plurality of training positioning images, and the training positioning images comprise training coordinate information and corresponding training positioning information; namely, the training coordinate information is used as input, the training positioning information is used as output, and the positioning model is obtained through training, wherein the positioning model can be a deep learning model.
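Purely as an illustration of the training idea above (the embodiment mentions a deep learning model; a linear least-squares regressor stands in for it here, and all training data is synthetic):

```python
import numpy as np

def fit_positioning_model(train_coords, train_poses):
    """Fit pose = [coords, 1] @ W via least squares (bias column appended)."""
    X = np.hstack([train_coords, np.ones((len(train_coords), 1))])
    W, *_ = np.linalg.lstsq(X, train_poses, rcond=None)
    return W

def predict_pose(W, coords):
    """Map one coordinate-information vector to positioning information."""
    return np.append(coords, 1.0) @ W

# Synthetic training set: training positioning info = 2 * training coordinates.
coords = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
poses = 2.0 * coords
W = fit_positioning_model(coords, poses)
print(np.round(predict_pose(W, np.array([0.5, 0.5])), 6))
```

A deep network would replace the linear map, but the input/output contract (training coordinate information in, training positioning information out) is the same as described.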
It should be understood that the positioning method implemented in the present embodiment is continuous, that is, the indoor environment of the robot is constantly monitored by the above-mentioned vision sensor, and the positioning images are continuously acquired at preset time intervals, for example, at 30 frames per second. The shooting angle of the vision sensor is fixed, and the vision sensor can be arranged at the top of the robot or at the side of the robot to form a fixed inclined angle with the robot.
In the embodiment, the robot can be positioned based on the indoor highlight target, the indoor highlight target is generally fixed relative to an indoor scene, and the problem that the position of the indoor robot cannot be accurately positioned due to movement of an obstacle in the prior art is solved.
Fig. 2 shows a flowchart of an implementation of the positioning method according to the second embodiment of the present application. Referring to fig. 2, in comparison with the embodiment shown in fig. 1, the positioning method S102 provided in this embodiment includes S201 to S203, which are detailed as follows:
further, the identifying the corresponding coordinate information of the highlight object in each of the positioning images includes:
in S201, a binarization process is performed on the positioning image to obtain a binarized image.
In this embodiment, in order to more accurately identify the highlight object in the positioning image, based on a digital image processing technique, the positioning image is subjected to binarization processing to obtain the binarized image.
In S202, a highlight region in the binarized image, in which the pixel value is greater than a preset pixel threshold value, is selected, and the highlight region is used as a target region of the highlight target.
In this embodiment, the pixel threshold may be preset, or may be adjusted according to the environment where the robot is located. Selecting a highlight area whose pixel value is greater than the preset pixel threshold in the binarized image and taking the highlight area as the target area of the highlight target may specifically be: identifying pixels whose pixel values are higher than the preset pixel threshold as highlight pixels, and identifying a highlight area formed by contiguous adjacent highlight pixels as a target area of the highlight target.
In S203, the coordinates of the preset point of the target region in the positioning image are identified as the coordinate information.
In this embodiment, the preset point may be the geometric center, the center of gravity, or the pixel centroid of the region where the highlight target is located. In a possible implementation manner, identifying the coordinates of the preset point of the target area in the positioning image as the coordinate information may specifically be: determining the pixel centroid of the target area through a clustering algorithm, determining the coordinates of that pixel centroid in the positioning image, and identifying those coordinates as the coordinate information. For the clustering algorithm, reference may be made to the prior art, and details are not repeated herein.
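The S201–S203 flow can be sketched as follows, under assumed parameters: binarize the image, split the highlight pixels into 4-connected regions by flood fill (a simple stand-in for the clustering step), and return the pixel centroid of each region as its coordinate information:

```python
import numpy as np
from collections import deque

def highlight_region_centroids(img, pixel_threshold=128):
    """Return one (col, row) pixel centroid per contiguous highlight region."""
    binary = img > pixel_threshold          # S201: binarization
    seen = np.zeros_like(binary, dtype=bool)
    centroids = []
    h, w = binary.shape
    for r in range(h):
        for c in range(w):
            if binary[r, c] and not seen[r, c]:
                # S202: flood-fill one 4-connected highlight region
                q, pixels = deque([(r, c)]), []
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pixels)
                # S203: pixel centroid of the region as coordinate information
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids

img = np.zeros((5, 8), dtype=np.uint8)
img[1:3, 1:3] = 255   # first highlight region
img[3, 6] = 255       # second highlight region
print(highlight_region_centroids(img))
```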
In this embodiment, by binarizing the positioning image through digital image processing, the highlight target in the positioning image can be accurately identified, improving the accuracy of the coordinate information.
Further, the positioning method S202 provided in this embodiment further includes S2021 to S2022, which are described in detail as follows:
the selecting a highlight area with a pixel value larger than a preset pixel threshold value in the binarized image, and taking the highlight area as a target area of the highlight target comprises the following steps:
in S2021, if the highlight region has axial symmetry, identifying the highlight region as a target region of the highlight target;
in this embodiment, in order to avoid some temporarily appearing highlighted targets from interfering with the positioning method of this embodiment, a certain limitation needs to be performed on the highlighted targets. The temporary high-brightness object which may cause interference may be some electronic devices (especially mobile devices), and the high-brightness object in the positioning method of the embodiment should be some high-brightness objects which do not move in space, such as a lamp tube and a bulb. Therefore, in this embodiment, it is defined that the highlight region for the subsequent step has axial symmetry, that is, if the highlight region has axial symmetry, the highlight region is identified as the target region of the highlight target.
In S2022, if the highlight region does not have axial symmetry, the highlight region is identified as an undetermined region.
In this embodiment, similarly, if the highlight region does not have axial symmetry, it is considered that the highlight region may be a region where a highlight target that may cause interference temporarily appears, and therefore such highlight region is not used in the subsequent step, that is, if the highlight region does not have axial symmetry, the highlight region is identified as an undetermined region.
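One simple way to test the axial symmetry required by S2021/S2022 (a hedged illustration; the choice of mirror axes and the exact-match comparison are assumptions) is to compare a region's cropped binary mask with its mirror image about the vertical and horizontal axes:

```python
import numpy as np

def has_axial_symmetry(mask):
    """True if the tight crop of the region mask is symmetric about either axis."""
    rows, cols = np.nonzero(mask)
    crop = mask[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    return bool(np.array_equal(crop, crop[:, ::-1]) or
                np.array_equal(crop, crop[::-1, :]))

lamp = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]])          # symmetric: kept as a target region
phone_glare = np.array([[1, 1, 0],
                        [0, 1, 1],
                        [0, 0, 1]])   # asymmetric: marked as undetermined
print(has_axial_symmetry(lamp), has_axial_symmetry(phone_glare))
```

In practice a tolerance (e.g. a fraction of mismatching pixels) would replace the exact comparison, since real highlight regions are noisy.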
In this embodiment, the highlight target is required to have axial symmetry, so that interference from temporarily appearing highlight objects is avoided when subsequently determining the positioning information of the robot, improving the accuracy of positioning the robot according to the highlight region.
Fig. 3 shows a flowchart of an implementation of the positioning method according to the third embodiment of the present application. Referring to fig. 3, with respect to the embodiment shown in fig. 1, before S103, the positioning method provided in this embodiment further includes S301 to S304, which are detailed as follows:
in this embodiment, the positioning image includes at least two highlighted objects.
Further, before determining the positioning information of the robot based on the plurality of pieces of coordinate information, the method further includes:
in S301, the positioning images are sorted based on the order of the acquisition time, and the first N positioning images are selected from all the positioning images as comparison images.
In this embodiment, N is a positive integer greater than 1. The positioning method is continuous, that is, positioning images are continuously acquired at a preset time interval and are therefore ordered by acquisition time as they are acquired. To enable subsequent determination of the positioning information of the robot, a reference is required, namely reference images together with the reference positioning information of the robot at the moment each reference image was acquired. The first N positioning images are selected so that this reference is established as early as possible.
In S302, contrast positioning information corresponding to the robot at the acquisition time corresponding to each contrast image is acquired, and a positioning coordinate system is established based on first contrast positioning information corresponding to a first contrast image.
In this embodiment, the first contrast image is the contrast image whose acquisition time is the earliest among the contrast images.
In a possible implementation manner, obtaining the contrast positioning information of the robot at the acquisition time of each contrast image may specifically be: presetting an action path for the robot and acquiring a contrast image when the robot completes the action path, so that the contrast positioning information corresponding to that contrast image is determined. When instructing the robot to complete the action path, whether the robot moves according to the action path may be monitored through a component such as an odometer, a gyroscope, a laser radar, a distance sensor, an optical sensor, or a vision sensor.
In a possible implementation manner, establishing the positioning coordinate system based on the first contrast positioning information corresponding to the first contrast image may specifically be: taking the first contrast positioning information corresponding to the first contrast image as the origin of the positioning coordinate system. The positioning coordinate system includes the spatial horizontal coordinates and the attitude rotation angle of the robot; that is, the spatial horizontal coordinates of the robot at the time of acquiring the first contrast image are (0,0) and its attitude rotation angle is 0, so the positioning information of the robot at that time is (0,0,0).
It should be understood that the spatial coordinates in the subsequent steps are referenced to the positioning coordinate system.
In S303, the contrast coordinates of the highlight object in each of the contrast images are determined.
In this embodiment, the above determining the contrast coordinates of the highlight object in each contrast image may specifically refer to the above description of S102, and is not repeated herein.
In S304, based on the contrast positioning information and the contrast coordinates, spatial position information of the highlight target is calculated.
In this embodiment, the positioning image includes a first highlight target and a second highlight target. Calculating the spatial position information of the highlight targets in the positioning coordinate system based on the contrast positioning information and the contrast coordinates may specifically be, taking the first contrast image and the second contrast image as an example: determining a first external reference matrix based on the first contrast positioning information corresponding to the first contrast image, and a second external reference matrix based on the second contrast positioning information corresponding to the second contrast image; acquiring the internal parameters of the device that acquired the contrast images and generating an internal reference matrix; determining the coordinates of the first highlight target and the second highlight target in the first contrast image and the second contrast image, respectively; and constructing contrast space equations to solve the spatial three-dimensional coordinates of the first highlight target and the second highlight target. The contrast space equations are as follows:
a1 = K*M1*A;  a2 = K*M2*A
b1 = K*M1*B;  b2 = K*M2*B
wherein a1 is the contrast coordinate of the first highlight target in the first contrast image; b1 is the contrast coordinate of the second highlight target in the first contrast image; a2 is the contrast coordinate of the first highlight target in the second contrast image; b2 is the contrast coordinate of the second highlight target in the second contrast image; K is the internal reference matrix; M1 is the first external reference matrix; M2 is the second external reference matrix; A is the spatial position information, i.e., the spatial three-dimensional coordinates, of the first highlight target; and B is the spatial position information of the second highlight target.
It is to be understood that the horizontal coordinate in the spatial three-dimensional coordinate system is based on the above-mentioned positioning coordinate system, and the height in the spatial three-dimensional coordinate system is also based on the unit coordinate of the positioning coordinate system.
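The contrast space equations above are the standard two-view projection relations, so the spatial three-dimensional coordinates can be solved by linear (DLT) triangulation. The sketch below illustrates this for one target; the intrinsic matrix K, the extrinsic matrices M1 and M2, and the target position are synthetic assumptions, not values from the embodiment:

```python
import numpy as np

def triangulate(P1, P2, a1, a2):
    """Solve for homogeneous A with a1 ~ P1*A and a2 ~ P2*A, where P = K @ M."""
    rows = []
    for P, (u, v) in ((P1, a1), (P2, a2)):
        rows.append(u * P[2] - P[0])   # each view contributes two linear rows
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.array(rows))
    A = Vt[-1]                          # null vector of the stacked system
    return A[:3] / A[3]                 # de-homogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

K = np.array([[100., 0., 50.], [0., 100., 50.], [0., 0., 1.]])
M1 = np.hstack([np.eye(3), [[0.], [0.], [0.]]])    # first contrast pose
M2 = np.hstack([np.eye(3), [[-1.], [0.], [0.]]])   # second contrast pose
A_true = np.array([0.5, 0.2, 4.0])                 # highlight target in space

a1, a2 = project(K @ M1, A_true), project(K @ M2, A_true)
print(np.round(triangulate(K @ M1, K @ M2, a1, a2), 6))
```

With noise-free coordinates the triangulated point matches A_true exactly; with real measurements a least-squares solution over more than two contrast images is preferable.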
In this embodiment, spatial position information of at least two highlighted targets is determined for subsequent determination of positioning information of the robot.
Fig. 4 shows a flowchart of an implementation of the positioning method according to the fourth embodiment of the present application. Referring to fig. 4, with respect to the embodiment described in fig. 3, the positioning method S103 provided in this embodiment includes S1031 to S1032, which are detailed as follows:
further, the determining the positioning information of the robot based on the plurality of coordinate information comprises:
in S1031, a target spatial equation is constructed based on the coordinate information and the spatial position information.
In this embodiment, the target space equation is as follows:
at = K*Mt*A
bt = K*Mt*B
wherein at is the coordinate information of the first highlight target in the positioning image; bt is the coordinate information of the second highlight target in the positioning image; K is the internal parameter matrix constructed based on preset internal parameters; Mt is the external parameter matrix of the robot at the acquisition time of the positioning image; A is the spatial position information, i.e., the spatial coordinates, of the first highlight target in the positioning coordinate system; and B is the spatial position information of the second highlight target in the positioning coordinate system.
In S1032, the objective space equation is solved to obtain the external reference matrix, and the positioning information of the robot in the positioning coordinate system is determined based on the external reference matrix.
In this embodiment, the external reference matrix represents the spatial horizontal coordinate and the attitude rotation angle (based on the positioning coordinate system).
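As a hedged two-dimensional analogue of S1031–S1032 (the full method solves for the external parameter matrix Mt from the projection equations), the robot's spatial horizontal coordinates and attitude rotation angle can be recovered from two highlight targets of known spatial position; here the targets' positions in the robot frame stand in for the image measurements, and all numbers are synthetic:

```python
import numpy as np

def solve_pose_2d(world_pts, robot_pts):
    """Find (theta, t) such that world = R(theta) @ robot + t for both targets."""
    dw = world_pts[1] - world_pts[0]
    dr = robot_pts[1] - robot_pts[0]
    # The angle between the two target-difference vectors is the heading.
    theta = np.arctan2(dw[1], dw[0]) - np.arctan2(dr[1], dr[0])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = world_pts[0] - R @ robot_pts[0]  # translation from either target
    return theta, t

world = np.array([[3.0, 1.0], [2.0, 3.0]])   # known target positions (cf. S304)
robot = np.array([[0.0, -1.0], [2.0, 0.0]])  # same targets in the robot frame
theta, t = solve_pose_2d(world, robot)
print(round(float(theta), 6), np.round(t, 6))  # attitude angle and position
```

This mirrors the claim that two highlight targets suffice to fix both the horizontal position and the attitude rotation angle.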
In this embodiment, the positioning information of the robot can be determined according to the mathematical relationship by at least two highlighted targets.
Fig. 5 shows a flowchart of an implementation of the positioning method provided in the fifth embodiment of the present application. Referring to fig. 5, with respect to the embodiment shown in fig. 1, the positioning method S103 provided in this embodiment includes S501 to S503, which are detailed as follows:
in this embodiment, the positioning image only includes one highlighted target, and the positioning information includes a spatial horizontal coordinate and an attitude rotation angle.
Further, the determining the positioning information of the robot based on the plurality of coordinate information comprises:
in S501, the attitude rotation angle is determined based on the assist sensor.
In this embodiment, the auxiliary sensor may be a gyroscope to record the rotation of the robot, that is, to determine a rotation variation value between an initial time when the robot acquires the positioning image and a time when the robot determines the positioning information, that is, the attitude rotation angle.
In S502, spatial position information of the highlight target is determined.
In this embodiment, the above-mentioned determining the spatial position information of the highlight object may refer to the above-mentioned related description of S304, and is not described herein again. It should be noted that in this embodiment, only the spatial position of one highlight object needs to be determined.
In S503, the spatial horizontal coordinates of the robot are calculated based on the coordinate information and the spatial position information of the highlight target.
In this embodiment, since the attitude rotation angle does not need to be calculated, the spatial horizontal coordinates can be obtained by constructing only the equation for the spatial horizontal coordinates. For the specific calculation, reference may be made to the description of S1031, which is not repeated herein. It should be noted that in this embodiment there is only one highlight target and the external parameters do not include the attitude rotation angle; one unknown is therefore eliminated, and a single equation, rather than a system of two equations, suffices to solve the spatial horizontal coordinates of the robot.
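The fifth embodiment can be sketched in two dimensions as follows (an illustration under assumed numbers, not the embodiment's actual solver): with the attitude rotation angle already supplied by the auxiliary sensor, one highlight target of known spatial position suffices to solve the robot's horizontal coordinates:

```python
import numpy as np

def solve_position(theta, world_pt, robot_pt):
    """Given heading theta and one landmark, solve t in world = R(theta) @ robot + t."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return world_pt - R @ robot_pt

theta = np.pi / 2                       # from the gyroscope (S501)
world = np.array([3.0, 1.0])            # known spatial position of the target (S502)
in_robot_frame = np.array([0.0, -1.0])  # target as measured by the robot (S503)
print(np.round(solve_position(theta, world, in_robot_frame), 6))
```

Compared with the two-target case, fixing theta removes one unknown, which is why a single measurement equation is enough.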
In the embodiment, the attitude rotation angle of the robot is determined according to the auxiliary sensor so as to reduce the calculation amount in the subsequent determination of the positioning information of the robot, and the positioning method capable of determining the positioning information of the robot by only one highlight target is provided.
Fig. 6 shows a flowchart of an implementation of a positioning method according to a sixth embodiment of the present application. Referring to fig. 6, in comparison with the embodiment shown in fig. 1, the positioning method provided in this embodiment further includes S601 to S604, which are detailed as follows:
further, the positioning method further includes:
in S601, any two of the positioning images that are continuously acquired are selected and identified as a first test image and a second test image.
In this embodiment, selecting any two continuously acquired positioning images and identifying them as the first test image and the second test image may specifically be: sorting the positioning images based on the order of acquisition time, and selecting any two adjacent positioning images as the first test image and the second test image. It should be understood that the positioning method is continuous, that is, the positioning images are continuously acquired at preset time intervals and are therefore ordered by acquisition time as they are acquired.
In S602, a first test image coordinate in the first test image and a second test image coordinate in the second test image of the highlighted target are determined.
In this embodiment, the above determining the first test image coordinate of the highlight object in the first test image and the second test image coordinate in the second test image may specifically refer to the above description of S102, and details are not repeated here.
In S603, the first test image coordinates and the second test image coordinates of each highlight target are compared to obtain a test difference value.
In this embodiment, the test difference value is used to indicate the position change of the highlight object in the first test image and the second test image, so as to further determine whether the position of the highlight object in the space has changed.
In a possible implementation manner, the comparing the first test image coordinate and the second test image coordinate of each highlight target to obtain a test difference value may specifically be: and calculating the difference value of the first test image coordinate and the second test image coordinate in each dimension, and identifying the sum of all the difference values as the test difference value.
In S604, if the test difference value is greater than a preset difference threshold, an error report or a prompt message is generated, and the determination of the positioning information of the robot based on the coordinate information is stopped or the robot is switched to another positioning mode.
In this embodiment, the difference threshold may be specifically determined according to the preset time interval. If the test difference value is greater than the difference threshold value, it indicates that the highlighted target has a position change in space (or the highlighted target disappears due to the light being turned off), at this time, the robot should not be positioned continuously, error reporting information or reminding information needs to be generated to inform a user, and the ongoing step of determining the positioning information of the robot based on the coordinate information is stopped, so that the robot restarts to execute the positioning method provided by the embodiment. In particular, if the highlighted target cannot be continuously identified in the acquired positioning image, the robot cannot continuously execute the positioning method provided by the embodiment, and at this time, the robot needs to be switched to another positioning mode.
In this embodiment, the positioning method is continuous, that is, the positioning images are acquired at preset time intervals, and in the process of continuously acquiring the positioning images, the highlight targets can be tracked in real time by performing the test on the first test image and the second test image, so that different highlight targets can be distinguished in each acquired positioning image.
Fig. 7 shows a schematic structural diagram of a positioning device provided in an embodiment of the present application, corresponding to the method described in the above embodiment, and only shows a part related to the embodiment of the present application for convenience of description.
Referring to fig. 7, the positioning apparatus includes: the positioning image acquisition module is used for acquiring a plurality of positioning images containing highlight targets; the highlight target identification module is used for identifying the corresponding coordinate information of the highlight target in each positioning image; and the positioning information determining module is used for determining the positioning information of the robot based on the coordinate information.
Optionally, the highlighted target identification module includes: the image processing module is used for carrying out binarization processing on the positioning image to obtain a binarization image; a target area determining module, configured to select a highlight area in the binarized image, where a pixel value of the highlight area is greater than a preset pixel threshold, and use the highlight area as a target area of the highlight target; and the coordinate information determining module is used for identifying the coordinates of the central point of the target area in the positioning image as the coordinate information.
Optionally, the target area determining module is further configured to, if the highlight area has axial symmetry, identify the highlight area as a target area of the highlight target; if the highlight area does not have axial symmetry, identifying the highlight area as an undetermined area.
Optionally, the positioning image includes at least two highlighted targets; the positioning device further comprises: the comparison image determining module is used for sequencing the positioning images based on the sequence of the acquisition time and selecting the first N positioning images from all the positioning images as contrast images; N is a positive integer greater than 1; the positioning coordinate system establishing module is used for acquiring contrast positioning information corresponding to the robot at the acquisition time corresponding to each contrast image and establishing a positioning coordinate system based on first contrast positioning information corresponding to a first contrast image; the first contrast image is the contrast image with the earliest acquisition time among the contrast images; the contrast coordinate determination module is used for determining contrast coordinates of the highlight target in each contrast image; and the spatial position information calculation module is used for calculating the spatial position information of the highlight target in the positioning coordinate system based on the contrast positioning information and the contrast coordinates.
Optionally, the positioning information determining module includes: a target space equation constructing module, configured to construct target space equations based on the coordinate information and the spatial position information, where the target space equations are as follows: at = K*Mt*A; bt = K*Mt*B; wherein at is the coordinate information of the first highlight target in the positioning image; bt is the coordinate information of the second highlight target in the positioning image; K is the internal parameter matrix constructed based on preset internal parameters; Mt is the external parameter matrix of the robot at the acquisition time of the positioning image; A is the spatial position information, i.e., the spatial coordinates, of the first highlight target in the positioning coordinate system; and B is the spatial position information of the second highlight target in the positioning coordinate system; and an external parameter matrix solving module, configured to solve the target space equations to obtain the external parameter matrix and determine the positioning information of the robot in the positioning coordinate system based on the external parameter matrix.
Optionally, the positioning image includes only one highlighted target, and the positioning information includes a spatial horizontal coordinate and an attitude rotation angle. The positioning information determining module further comprises: an attitude rotation angle determining module, configured to determine the attitude rotation angle based on an auxiliary sensor; a spatial position information determining module, configured to determine the spatial position information of the highlighted target; and a spatial horizontal coordinate calculating module, configured to calculate the spatial horizontal coordinate of the robot based on the coordinate information and the spatial position information of the highlighted target.
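One way the single-target case can work is sketched below: the attitude rotation angle (yaw) comes from the auxiliary sensor, the target's bearing is derived from its pixel coordinate, and the robot's horizontal coordinates are obtained by stepping back from the known target position along the line of sight. The camera parameters, the known range d, and all function names are assumptions for illustration only, not the disclosed method.

```python
import math

def robot_horizontal_coordinates(target_xy, pixel_u, yaw, d, fx=800.0, cx=320.0):
    """Back out the robot's spatial horizontal coordinates from a single
    highlighted target at known position target_xy, its horizontal pixel
    coordinate pixel_u, the yaw angle from an auxiliary sensor, and an
    assumed known range d to the target."""
    tx, ty = target_xy
    # Bearing of the target relative to the camera's optical axis.
    bearing_cam = math.atan2(pixel_u - cx, fx)
    # Bearing expressed in the positioning coordinate system.
    bearing_world = yaw + bearing_cam
    # Step back from the target along the line of sight by the range d.
    rx = tx - d * math.sin(bearing_world)
    ry = ty - d * math.cos(bearing_world)
    return rx, ry
```

For example, a target 2 m straight ahead (pixel at the principal point, yaw 0) places the robot 2 m behind the target along the viewing direction.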
Optionally, the positioning device further includes: a test image selecting module, configured to select any two consecutively acquired positioning images and label them as a first test image and a second test image;
a test image coordinate determining module, configured to determine, for each highlighted target, first test image coordinates in the first test image and second test image coordinates in the second test image;
and a test difference comparing module, configured to compare the first test image coordinates and the second test image coordinates of each highlighted target to obtain a test difference; if the test difference is greater than a preset difference threshold, error information is generated and the determination of the positioning information of the robot based on the coordinate information is stopped.
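The consistency check above can be sketched as follows: the per-target pixel displacement between two consecutively acquired positioning images is compared against a preset threshold, and an implausibly large jump aborts positioning. The dictionary layout, the Euclidean distance metric, and the error-signaling mechanism are assumptions for illustration.

```python
def check_test_difference(first_coords, second_coords, threshold):
    """first_coords and second_coords map a target identifier to its
    (u, v) pixel coordinates in the first and second test images.
    Raises RuntimeError (the error information) if any target's
    displacement exceeds the preset difference threshold."""
    for target_id, (u1, v1) in first_coords.items():
        u2, v2 = second_coords[target_id]
        diff = ((u2 - u1) ** 2 + (v2 - v1) ** 2) ** 0.5
        if diff > threshold:
            raise RuntimeError(
                f"target {target_id}: displacement {diff:.1f}px exceeds "
                f"threshold {threshold}px; positioning stopped")
    return True
```

A small displacement between consecutive frames passes; a jump larger than the threshold suggests a misidentified target and stops the coordinate-based positioning.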
It should be noted that the information interaction between the above-mentioned apparatuses, their execution processes, and other details are based on the same concept as the method embodiments of the present application; for their specific functions and technical effects, reference may be made to the method embodiment section, which is not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 8 shows a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 8, the terminal device 8 of this embodiment includes: at least one processor 80 (only one processor is shown in fig. 8), a memory 81, and a computer program 82 stored in the memory 81 and executable on the at least one processor 80, the processor 80 implementing the steps in any of the various method embodiments described above when executing the computer program 82.
The terminal device 8 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device that establishes a communication connection with the robot so that the robot can implement the positioning method provided in this embodiment; or it may be the robotic device itself. The terminal device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of the terminal device 8 and does not constitute a limitation of the terminal device 8, which may include more or fewer components than shown, a combination of some components, or different components, such as an input-output device, a network access device, and the like.
The processor 80 may be a Central Processing Unit (CPU), and the processor 80 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 81 may, in some embodiments, be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8. In other embodiments, the memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program. The memory 81 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunication signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.