CN110602378A - Processing method, device and equipment for images shot by camera


Info

Publication number
CN110602378A
Authority
CN
China
Prior art keywords
camera
image
abstract
original image
entity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910738356.8A
Other languages
Chinese (zh)
Other versions
CN110602378B (en)
Inventor
赵鹏飞 (Zhao Pengfei)
刘新建 (Liu Xinjian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202110400207.8A priority Critical patent/CN113114941B/en
Priority to CN201910738356.8A priority patent/CN110602378B/en
Publication of CN110602378A publication Critical patent/CN110602378A/en
Application granted granted Critical
Publication of CN110602378B publication Critical patent/CN110602378B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof


Abstract

The embodiments of this specification provide a processing method, apparatus, and device for images shot by a camera. The method includes: when an original image shot by the currently accessed physical camera is acquired, obtaining a target abstract camera corresponding to the original image in an abstraction layer, and converting the original image into a standard image corresponding to the target abstract camera.

Description

Processing method, device and equipment for images shot by camera
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method, an apparatus, and a device for processing an image captured by a camera.
Background
Cameras used in public-facing Internet-of-Things (IoT) equipment can shoot various types of images, such as two-dimensional images, three-dimensional depth images, and infrared (IR) images. Different merchants typically integrate cameras of different models, or from different manufacturers, that can shoot these various image types. Because such cameras are non-standardized, cameras of different models or from different manufacturers often require different drivers and must be paired with different application programs. As a result, upgrade and maintenance efficiency is low; moreover, when a camera is upgraded or replaced, the original driver or application program can no longer be used, so compatibility is poor.
Disclosure of Invention
One or more embodiments of the present disclosure provide a method, an apparatus, and a device for processing images captured by a camera, so as to unify different types of physical cameras, improve their upgrade and maintenance efficiency, reduce the complexity of camera replacement, allow application programs to use the corresponding images directly without being aware of the physical camera type, and improve compatibility.
To solve the above technical problem, one or more embodiments of the present specification are implemented as follows:
one or more embodiments of the present specification provide a processing method for an image captured by a camera, including:
acquiring an original image shot by a currently accessed entity camera;
acquiring a target abstract camera corresponding to the original image in an abstract layer, wherein the abstract layer comprises a plurality of abstract cameras, and the abstract cameras correspond to image types which can be shot by different types of entity cameras;
converting the original image into a standard image corresponding to the target abstract camera.
One or more embodiments of the present specification provide a processing apparatus for images captured by a camera, including:
the first acquisition module is used for acquiring an original image shot by a currently accessed entity camera;
the second acquisition module is used for acquiring a target abstract camera corresponding to the original image in an abstract layer, wherein the abstract layer comprises a plurality of abstract cameras, and the abstract cameras correspond to image types which can be shot by a plurality of entity cameras;
and the conversion module is used for converting the original image into a standard image corresponding to the target abstract camera.
One or more embodiments of the present specification provide a processing device for images captured by a camera, including: a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to carry out the steps of the above processing method for images captured by a camera.
One or more embodiments of the present specification provide a storage medium for storing computer-executable instructions, which when executed, implement the steps of the processing method for capturing images by a camera described above.
According to the embodiments of this specification, different types of physical cameras are unified. For upgrade and maintenance, only the abstraction layer needs to be maintained, rather than a plurality of different drivers, which improves upgrade and maintenance efficiency. For replacement of a physical camera, neither the driver nor the application program needs to be replaced, which reduces the complexity of the replacement operation. Moreover, an application program can use the corresponding image directly without being aware of the physical camera type, which improves compatibility.
Drawings
To more clearly illustrate the technical solutions of one or more embodiments of this specification or of the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of this specification; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of upgrading and replacing a physical camera in the prior art;
fig. 2 is a first flowchart of a processing method for capturing an image by a camera according to one or more embodiments of the present disclosure;
fig. 3 is a second flowchart of a processing method for capturing an image by a camera according to one or more embodiments of the present disclosure;
FIG. 4 is a detailed diagram of step S104 provided in one or more embodiments of the present description;
FIG. 5 is a detailed diagram of step S106 provided in one or more embodiments of the present description;
fig. 6 is a third flowchart of a processing method for capturing an image by a camera according to one or more embodiments of the present disclosure;
FIG. 7 is a schematic diagram of a physical camera upgrade and replacement provided by one or more embodiments of the present disclosure;
fig. 8 is a schematic block diagram illustrating a processing apparatus for capturing an image by a camera according to one or more embodiments of the present disclosure;
fig. 9 is a schematic structural diagram of a processing device for taking an image by a camera according to one or more embodiments of the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in one or more embodiments of this specification, these solutions are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments that a person skilled in the art can derive from the embodiments described herein without inventive effort fall within the scope of protection of this document.
To better understand the problems in the prior art and the technical effects of the processing method, apparatus, and device provided in the embodiments of this specification, the prior art is first described using the example of a supermarket chain; see fig. 1, a schematic diagram of upgrading and replacing a camera in the prior art. The IoT settlement device 1 at the cashier position of supermarket No. 1 is provided with camera A purchased from manufacturer 1, driver 1 for camera A, and application program 1 used together with camera A. The IoT settlement device 2 at the cashier position of supermarket No. 2 is provided with camera B purchased from manufacturer 2, driver 2 for camera B, and application program 2 used together with camera B. The IoT settlement device 3 at the cashier position of supermarket No. 3 is provided with camera C purchased from manufacturer 3, driver 3 for camera C, and application program 3 used together with camera C. Camera A can shoot two-dimensional and three-dimensional images, camera B can shoot two-dimensional and infrared images, and camera C can shoot two-dimensional, three-dimensional, and infrared images; that is, the three cameras are of three different types. When these three cameras need to be upgraded, because they use different drivers, the three drivers must be upgraded separately to obtain driver 11, driver 22, and driver 33, so the upgrade efficiency is low. Furthermore, when camera A needs to be replaced with camera C, driver 1 installed in the IoT settlement device at cashier position No. 1 must be replaced with driver 3 for camera C at the same time, and application program 1 used with camera A must be replaced with application program 3 used with camera C; the replacement operation is troublesome and the compatibility is poor.
Based on this, one or more embodiments of this specification provide a method, an apparatus, and a device for processing images captured by a camera that adopt an abstraction-layer technique: the image types that different types of physical cameras can shoot are divided in advance, and a plurality of abstract cameras corresponding to the divided image types are provided in the abstraction layer. When an original image shot by the currently accessed physical camera is acquired, a target abstract camera corresponding to the original image is obtained in the abstraction layer, and the original image is converted into a standard image corresponding to the target abstract camera and provided for use by an application program. This unifies different types of physical cameras. For upgrade and maintenance, only the abstraction layer needs to be maintained, rather than a plurality of different drivers, which improves efficiency; for replacement, neither the driver nor the application program needs to be replaced, which reduces complexity; and an application program can use the corresponding image directly without being aware of the physical camera type, which improves compatibility.
Fig. 2 is a flowchart of a processing method for images captured by a camera according to one or more embodiments of this specification. The method in fig. 2 may be performed by an image processing apparatus and, as shown in fig. 2, includes the following steps:
step S102, acquiring an original image shot by a currently accessed entity camera;
the entity camera can shoot at least one image of a two-dimensional image, a three-dimensional image, an infrared image and an iris image; the two-dimensional image is an RGB image, a YUV image, an MJPEG image, and the three-dimensional image is a depth image.
Step S104, acquiring a target abstract camera corresponding to an original image in an abstract layer, wherein the abstract layer comprises a plurality of abstract cameras, and the abstract cameras correspond to image types which can be shot by different types of entity cameras;
the abstraction layer, such as a hardware abstraction layer, abstracts the interface of the hardware platform, hides the interface details of the hardware platform, provides an abstraction platform interface, and has hardware independence. In one or more embodiments of the present description, image types that can be captured by a plurality of different types of entity cameras are divided in advance, and correspond to the divided image types, and a plurality of abstract cameras are provided in an abstract layer, where an abstract camera includes at least one of a two-dimensional abstract camera, a three-dimensional abstract camera, an infrared abstract camera, and an iris abstract camera, so as to achieve independence from the entity cameras, that is, achieve a uniform purpose of the different types of entity cameras.
And step S106, converting the original image into a standard image corresponding to the target abstract camera.
In one or more embodiments of this specification, an abstraction-layer technique is adopted: a plurality of abstract cameras corresponding to the image types that different types of physical cameras can shoot are provided in the abstraction layer, so different types of physical cameras are unified. For camera upgrade and maintenance, only the abstraction layer needs to be maintained, rather than a plurality of different drivers, which improves efficiency. For camera replacement, neither the driver nor the application program needs to be replaced, which reduces the complexity of the operation and improves compatibility. Moreover, an application program can use the corresponding image directly without being aware of the physical camera type, which speeds up image-based transaction processing.
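The three steps S102 to S106 can be sketched as a small lookup-and-convert layer. All class and function names below are illustrative assumptions for this sketch; the patent does not specify any API:

```python
# Illustrative sketch of steps S102-S106: an abstraction layer that maps the
# image type of a captured frame to an abstract camera, which carries the
# standard parameters the frame should be normalized to. All names are
# hypothetical; the patent describes the mechanism, not a concrete interface.

from dataclasses import dataclass

@dataclass
class AbstractCamera:
    image_type: str          # e.g. "2d", "3d", "ir", "iris"
    std_resolution: tuple    # standard parameters of this abstract camera
    std_format: str

class AbstractionLayer:
    def __init__(self, cameras):
        # one abstract camera per image type the layer supports
        self._by_type = {cam.image_type: cam for cam in cameras}

    def target_camera(self, image_type):
        # step S104: look up the target abstract camera for the original image
        return self._by_type[image_type]

layer = AbstractionLayer([
    AbstractCamera("2d", (1920, 1080), "RGB"),
    AbstractCamera("3d", (640, 480), "DEPTH"),
    AbstractCamera("ir", (640, 480), "IR"),
])

cam = layer.target_camera("2d")
print(cam.std_format)  # RGB
```

Because the application only ever sees images normalized to an abstract camera's standard parameters, swapping the physical camera leaves this lookup unchanged.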
In one or more embodiments of this specification, a plurality of transmission channels are preset, one for each image type that the different types of physical cameras can shoot, and a correspondence is established between each channel's identifier and its image type, so that each transmission channel carries images of exactly one image type. Because different types of physical cameras can shoot different sets of image types, their corresponding transmission channels also differ; therefore, as shown in fig. 3, the method further includes, before step S102:
step S100: determining a transmission channel corresponding to a currently accessed entity camera;
specifically, the corresponding image type information is respectively sent to the currently accessed entity camera through the transmission channels corresponding to the image types that the entity cameras of the different types can shoot, so that the currently accessed entity camera determines the transmission channel corresponding to the currently accessed entity camera according to the image type information; and if the image type information is successfully sent, determining the corresponding transmission channel as the transmission channel corresponding to the currently accessed entity camera.
In one or more embodiments of this specification, after step S100 the method further includes: recording the channel identifier of each determined transmission channel, so that a target transmission channel can later be selected among the corresponding transmission channels according to the recorded channel identifiers, and the original image shot by the currently accessed physical camera can be acquired through that target transmission channel.
As an example, suppose the image types that the different types of physical cameras can shoot are two-dimensional, three-dimensional, infrared, and iris images. For ease of distinction, the transmission channel corresponding to two-dimensional images is denoted the first transmission channel, the channel corresponding to three-dimensional images the second, the channel corresponding to infrared images the third, and the channel corresponding to iris images the fourth, with channel identifiers 001, 002, 003, and 004, respectively. Two-dimensional image information is sent to the currently accessed physical camera through the first transmission channel, three-dimensional image information through the second, infrared image information through the third, and iris image information through the fourth. If success messages are received for the two-dimensional, three-dimensional, and infrared image information and a failure message for the iris image information, the first, second, and third transmission channels are taken as the transmission channels corresponding to the currently accessed physical camera, and channel identifiers 001, 002, and 003 are recorded; that is, the currently accessed physical camera can shoot two-dimensional, three-dimensional, and infrared images.
Further, when the currently accessed physical camera receives an image-type message, it marks the corresponding transmission channel according to that message so as to distinguish the channels, and then transmits each shot original image through the channel corresponding to its image type. For example, the aforementioned first transmission channel is marked 01, indicating that it is used for transmitting two-dimensional images; the second transmission channel is marked 02, for transmitting three-dimensional images; and the third transmission channel is marked 03, for transmitting infrared images.
Furthermore, if the sending of the image type information fails, it is determined that the corresponding transmission channel is not the transmission channel corresponding to the currently accessed entity camera, that is, the currently accessed entity camera cannot shoot the image of the corresponding image type.
Therefore, the transmission channel corresponding to the currently accessed entity camera is determined in the preset plurality of transmission channels, so that data interaction is carried out with the currently accessed entity camera through the transmission channel. That is, as shown in fig. 3, step S102 includes: acquiring an original image shot by a currently accessed entity camera through a determined transmission channel;
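The channel-determination step S100 described above can be sketched as a probe loop; the `send` callback, the channel table, and the function name are hypothetical, with the channel identifiers taken from the example in the text:

```python
# Hypothetical sketch of step S100: the host sends each image-type message
# down its preset channel; channels on which the send succeeds are the ones
# the currently accessed physical camera supports, and their identifiers are
# recorded for later channel selection.

CHANNELS = {"001": "2d", "002": "3d", "003": "ir", "004": "iris"}

def probe_channels(send):
    """send(channel_id, image_type) -> True if the camera accepts the message."""
    supported = []
    for channel_id, image_type in CHANNELS.items():
        if send(channel_id, image_type):      # send image-type info on this channel
            supported.append(channel_id)      # record the channel identifier
    return supported

# A camera that shoots 2D, 3D and infrared images, as in the example above:
camera_accepts = lambda cid, itype: itype in {"2d", "3d", "ir"}
print(probe_channels(camera_accepts))  # ['001', '002', '003']
```

A send failure simply leaves that channel unrecorded, matching the statement that a failed send means the camera cannot shoot that image type.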
in practical application, the currently accessed entity camera may issue the original image after shooting the original image, and correspondingly, step S102 specifically includes:
step S202, receiving an original image sent by a currently accessed entity camera through a corresponding transmission channel;
in one or more embodiments of the present description, after an original image is captured, a currently accessed entity camera may further store the original image into a preset storage area, so that an image processing apparatus obtains the original image in the preset storage area; correspondingly, step S102 specifically includes:
step S302, selecting a target transmission channel from the transmission channels corresponding to the currently accessed entity camera;
specifically, the image type which can be shot by the entity camera accessed currently is determined according to the recorded channel identification and the corresponding relation between the channel identification and the image type; and judging whether the image types which can be shot by the entity camera which is accessed currently comprise the target image type which needs to be obtained currently, if so, taking the transmission channel corresponding to the channel identification corresponding to the target image type as the target transmission channel. For example, according to the recorded channel identifiers 001, 002, and 003, it is determined that the type of the image that can be shot by the currently accessed entity camera is a two-dimensional image, a three-dimensional image, and an infrared image, the type of the target image that needs to be obtained currently is a three-dimensional image, and the type of the image that can be shot by the currently accessed entity camera includes a three-dimensional image, then the channel corresponding to the channel identifier 003 is used as the target transmission channel.
Step S304, acquiring an original image shot by the entity camera through a target transmission channel;
specifically, a preset storage area of a currently accessed entity camera is accessed through a target transmission channel, and an original image shot by the entity camera is acquired in the preset storage area.
Therefore, the original image shot by the entity camera is obtained through the determined transmission channel corresponding to the currently accessed entity camera and the transmission channel, and the corresponding target abstract camera is obtained according to the original image in the subsequent process.
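The target-channel selection of step S302 can be sketched as a lookup over the recorded identifiers; the mapping table and function name are illustrative, matching the example identifiers used in the text:

```python
# Sketch of step S302: pick the target transmission channel from the recorded
# channel identifiers, given the image type that currently needs to be
# obtained. Names are hypothetical.

ID_TO_TYPE = {"001": "2d", "002": "3d", "003": "ir", "004": "iris"}

def select_target_channel(recorded_ids, target_type):
    for cid in recorded_ids:
        if ID_TO_TYPE.get(cid) == target_type:
            return cid                 # channel that carries the target image type
    return None                        # camera cannot shoot this image type

# Recorded channels 001/002/003 (2D, 3D, infrared); a 3D image is needed:
print(select_target_channel(["001", "002", "003"], "3d"))  # 002
```

Returning `None` corresponds to the case where the needed image type is not among those the currently accessed physical camera can shoot.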
Since the plurality of abstract cameras included in the abstract layer correspond to image types that can be captured by different types of entity cameras, a corresponding target abstract camera can be obtained according to the image type of the original image, specifically, as shown in fig. 4, step S104 includes:
step S104-2: determining the image type of an original image;
specifically, when step S102 specifically includes step S202, step S104-2 includes: determining the image type corresponding to the transmission channel for receiving the original image as the image type of the original image;
for example, if the image type corresponding to the transmission channel receiving the original image is an infrared image, it is determined that the image type of the original image is an infrared image.
When the step S102 specifically includes the foregoing steps S302 to S304, the step S104-2 includes: and determining the image type corresponding to the selected target transmission channel as the image type of the original image.
For example, if the image type corresponding to the selected target transmission channel is a two-dimensional image, the image type of the original image is determined to be the two-dimensional image.
Step S104-4: and acquiring a corresponding target abstract camera in the abstract layer according to the image type of the original image.
Specifically, according to the image type of the original image and the corresponding relationship between the abstract camera and the image type, a corresponding target abstract camera is obtained from a plurality of abstract cameras included in the abstract layer.
In this way, the image type of the original image is determined, and the corresponding target abstract camera is obtained in the abstraction layer according to that image type, realizing the mapping from physical camera to abstract camera. The original image shot by the physical camera is subsequently converted into the standard image corresponding to the target abstract camera for use by an application program, so that the application program can directly use images shot by the physical camera without being aware of the type of the currently accessed physical camera, which speeds up image-based transaction processing.
To quickly and accurately convert an original image captured by a physical camera into the standard image corresponding to the target abstract camera, in one or more embodiments of this specification a standard parameter set is preset for the standard image corresponding to each abstract camera in the abstraction layer. On this basis, as shown in fig. 5, step S106 includes:
s106-2, acquiring standard parameters corresponding to the target abstract camera;
specifically, a parameter interface corresponding to the target abstract camera is called to obtain a standard parameter corresponding to the target abstract camera; the standard parameters include resolution, saturation, contrast, format, etc. of the image.
And S106-4, if the parameters of the original image are different from the acquired standard parameters, converting the original image into a standard image with the parameters as the standard parameters.
Specifically, determining parameters of an original image, judging whether the parameters of the original image are the same as the acquired standard parameters, and if so, determining the original image as a standard image; if not, the original image is converted into a standard image with the parameters as standard parameters.
As an example, the image type of the original image is a two-dimensional image whose parameters include a resolution of 1024 × 768, a saturation of 67%, a contrast of 100:1, and the YUV format, while the acquired standard parameters include a resolution of 1920 × 1080, a saturation of 80%, a contrast of 120:1, and the RGB format. Since the parameters of the original image differ from the acquired standard parameters, the original image is converted into a standard image with a resolution of 1920 × 1080, a saturation of 80%, a contrast of 120:1, and the RGB format, for direct use by the application program without it needing to be aware of the type of the currently accessed physical camera.
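The parameter check and conversion of steps S106-2 and S106-4 can be sketched as follows; the parameter dictionaries and the trivial `convert` callback are illustrative stand-ins for a real pixel-level conversion (resampling, saturation and contrast adjustment, format transcoding):

```python
# Minimal sketch of steps S106-2/S106-4: fetch the standard parameters of the
# target abstract camera and convert the original image only when its
# parameters differ. The convert callback here merely adopts the standard
# parameters; a real implementation would transform the pixel data itself.

def to_standard(image_params, standard_params, convert):
    if image_params == standard_params:
        return image_params            # original image is already standard (S416)
    return convert(image_params, standard_params)  # otherwise convert (S418)

original = {"resolution": (1024, 768), "format": "YUV"}
standard = {"resolution": (1920, 1080), "format": "RGB"}

result = to_standard(original, standard, lambda src, std: dict(std))
print(result["format"])  # RGB
```

Comparing parameters first avoids a needless conversion pass when the physical camera already produces output matching the abstract camera's standard.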
In a specific embodiment, taking as an example a currently accessed physical camera that stores each original image after shooting it, the method, as shown in fig. 6, includes:
step S402, determining a transmission channel corresponding to the entity camera which is accessed currently, and recording a channel identifier of the transmission channel;
step S404, selecting a target transmission channel from the transmission channels corresponding to the recorded channel identifiers;
step S406, acquiring an original image shot by the currently accessed entity camera shooting through a target transmission channel;
step S408, determining the image type corresponding to the target transmission channel as the image type of the original image;
step S410, acquiring a corresponding target abstract camera in an abstract layer according to the image type of an original image, wherein the abstract layer comprises a plurality of abstract cameras, and the abstract cameras correspond to the image types which can be shot by different types of entity cameras;
step S412, acquiring a standard parameter corresponding to the target abstract camera;
step S414, determining the parameters of the original image, judging whether the parameters of the original image are the same as the acquired standard parameters, if so, executing step S416, otherwise, executing step S418;
step S416, determining the original image as a standard image;
in step S418, the original image is converted into a standard image with standard parameters.
For a specific implementation process of step S402 to step S418, reference may be made to the foregoing related description, and repeated details are not repeated.
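The overall flow of steps S402 to S418 can be tied together in one sketch; every name, table, and the `fetch_image` callback below are hypothetical illustrations, not part of the patent:

```python
# End-to-end sketch of steps S402-S418: select the target channel, fetch the
# original image through it, derive the image type from the channel, and
# normalize the image to the target abstract camera's standard parameters.

ID_TO_TYPE = {"001": "2d", "002": "3d", "003": "ir"}
STANDARD = {"2d": {"format": "RGB"}, "3d": {"format": "DEPTH"}, "ir": {"format": "IR"}}

def process(recorded_ids, target_type, fetch_image):
    # S404: select the target channel for the needed image type
    cid = next((c for c in recorded_ids if ID_TO_TYPE[c] == target_type), None)
    if cid is None:
        return None                    # camera cannot shoot this image type
    # S406: fetch the original image over that channel
    image = fetch_image(cid)
    # S408-S412: channel -> image type -> standard parameters
    std = STANDARD[ID_TO_TYPE[cid]]
    # S414-S418: convert only if the parameters differ
    if image.get("format") != std["format"]:
        image = {**image, **std}
    return image

img = process(["001", "002"], "3d", lambda cid: {"format": "YUV", "channel": cid})
print(img["format"])  # DEPTH
```

The application that receives the returned image never needs to know which physical camera produced it, only which abstract camera's standard it conforms to.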
Through the above operations, different types of physical cameras are unified. Continuing with the supermarket example, see fig. 7, a schematic diagram of camera upgrade and replacement provided in one or more embodiments of this specification. With an image processing apparatus capable of performing the above operations arranged in each IoT settlement device, when camera A, camera B, and camera C need to be upgraded, only the abstraction layer needs to be maintained, which improves upgrade and maintenance efficiency. When camera A needs to be replaced with camera C, only camera A installed in the IoT settlement device at cashier position No. 1 needs to be swapped for camera C; the image processing apparatus and application program 1 in that device need not be replaced, and when the replacement camera C is later upgraded, again only the abstraction layer needs to be maintained. Meanwhile, application program 1 can use the images shot by camera C without being aware of camera C's type, which reduces the complexity of the replacement operation and improves compatibility.
In one or more embodiments of this specification, an abstraction-layer technique is adopted and a plurality of abstract cameras corresponding to the image types that different types of physical cameras can shoot are provided in the abstraction layer, so that when an original image shot by the currently accessed physical camera is acquired, the target abstract camera corresponding to the original image is obtained in the abstraction layer and the original image is converted into the standard image corresponding to the target abstract camera for use by an application program. This unifies different types of physical cameras: for upgrade and maintenance, only the abstraction layer needs to be maintained rather than a plurality of different drivers, which improves efficiency; for replacement, neither the driver nor the application program needs to be replaced, which reduces complexity; and an application program can use the corresponding image directly without being aware of the physical camera type, which improves compatibility.
On the basis of the same technical concept, one or more embodiments of the present specification further provide a processing apparatus for processing an image captured by a camera, corresponding to the processing method for an image captured by a camera described in fig. 2 to 6. Fig. 8 is a schematic block diagram of a processing apparatus for processing images captured by a camera according to one or more embodiments of the present disclosure, the apparatus being configured to execute the processing method for images captured by a camera described in fig. 2 to 6, and as shown in fig. 8, the apparatus includes:
a first obtaining module 501, configured to obtain an original image captured by a currently accessed entity camera;
a second obtaining module 502, configured to obtain a target abstract camera corresponding to the original image in an abstract layer, where the abstract layer includes a plurality of abstract cameras, and the abstract cameras correspond to types of images that can be captured by a plurality of entity cameras;
a converting module 503, configured to convert the original image into a standard image corresponding to the target abstract camera.
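As an illustration of how these three modules fit together, the following Python sketch wires them into a single processing flow. All class, method, and parameter names here are illustrative assumptions for exposition; the specification does not prescribe any particular implementation.

```python
from dataclasses import dataclass


@dataclass
class Image:
    image_type: str   # e.g. "2d", "3d", "infrared", "iris"
    width: int
    height: int
    pixels: bytes = b""


class AbstractCamera:
    """One abstract camera in the abstraction layer (illustrative)."""

    def __init__(self, image_type, std_width, std_height):
        self.image_type = image_type
        self.std_width = std_width
        self.std_height = std_height

    def to_standard(self, image):
        # Return a standard image whose parameters match this abstract
        # camera; a real converter would also rescale the pixel data.
        if (image.width, image.height) == (self.std_width, self.std_height):
            return image
        return Image(image.image_type, self.std_width, self.std_height,
                     image.pixels)


class CameraImageProcessor:
    """Sketch of the apparatus of fig. 8 (modules 501, 502, 503)."""

    def __init__(self, abstraction_layer):
        # abstraction_layer: dict mapping image type -> AbstractCamera
        self.abstraction_layer = abstraction_layer

    def process(self, entity_camera):
        original = entity_camera.capture()                    # module 501
        target = self.abstraction_layer[original.image_type]  # module 502
        return target.to_standard(original)                   # module 503
```

An application program then consumes only the standard image and never needs to know which entity camera produced the original frame.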
In one or more embodiments of the present description, an abstraction layer technology is used to implement unification of different types of entity cameras, and for upgrade maintenance of the entity cameras, only the abstraction layer needs to be maintained without maintaining a plurality of different drivers, thereby improving upgrade maintenance efficiency; for the replacement of the entity camera, a driver and an application program do not need to be replaced, so that the complexity of replacement operation is reduced; moreover, for the application program, the corresponding image can be directly used without sensing the type of the entity camera, and the compatibility is improved.
Optionally, the second obtaining module 502 determines an image type of the original image; and acquiring a corresponding target abstract camera in an abstract layer according to the image type of the original image.
Optionally, the apparatus further comprises: a determination module;
the determining module determines a transmission channel corresponding to the currently accessed entity camera before the first obtaining module 501 obtains the original image shot by the currently accessed entity camera;
correspondingly, the first obtaining module 501 obtains the original image shot by the currently accessed entity camera through the transmission channel.
Optionally, the determining module sends corresponding image type information to the currently accessed entity camera through the transmission channels corresponding to the image types that the entity cameras of different types can shoot; and if the image type information is successfully sent, determining the corresponding transmission channel as the transmission channel corresponding to the currently accessed entity camera.
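The probe described above can be sketched as follows. The `Channel` interface and the error handling are assumptions made for illustration; the specification only requires that image type information be sent over each candidate channel and that the channel on which the send succeeds be taken as the camera's channel.

```python
def determine_transmission_channel(channels):
    """Probe each candidate channel until one accepts the send.

    channels: dict mapping an image type to a channel object with a
    .send() method (a hypothetical interface for this sketch).
    """
    for image_type, channel in channels.items():
        try:
            # Send the image type information associated with this channel.
            channel.send({"image_type": image_type})
        except OSError:
            # Send failed: the camera is not reachable on this channel.
            continue
        # Send succeeded: this is the currently accessed camera's channel.
        return channel
    raise RuntimeError("no transmission channel matched the entity camera")
```

Because each channel is tied to one image type, a successful send simultaneously identifies the channel and the type of image the camera will deliver.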
Optionally, the first obtaining module 501 receives an original image sent by a currently accessed entity camera through a corresponding transmission channel;
correspondingly, the second obtaining module 502 determines an image type corresponding to a transmission channel for receiving the original image as the image type of the original image.
Optionally, the first obtaining module 501 selects a target transmission channel from transmission channels corresponding to a currently accessed entity camera; acquiring an original image shot by the entity camera through the target transmission channel;
correspondingly, the second obtaining module 502 determines the image type corresponding to the target transmission channel as the image type of the original image.
Optionally, the conversion module 503 is configured to obtain a standard parameter corresponding to the target abstract camera; and if the parameters of the original image are different from the standard parameters, converting the original image into a standard image with the parameters as the standard parameters.
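The parameter check performed by the conversion module might look like the following sketch, where an image and the standard parameters are represented as plain dictionaries — an assumption made for brevity, since the actual parameter sets and conversion routines are implementation-specific.

```python
def to_standard_image(original, standard):
    """Normalize an image's parameters to an abstract camera's standard.

    original: dict of the raw image's parameters (plus payload keys);
    standard: dict of the target abstract camera's standard parameters.
    Returns a new dict; the input is left unmodified.
    """
    converted = dict(original)
    for param, std_value in standard.items():
        if converted.get(param) != std_value:
            # A real implementation would rescale or re-encode the pixel
            # data here; this sketch only normalizes the parameter values.
            converted[param] = std_value
    return converted
```

If every parameter of the original image already matches the standard parameters, the loop makes no changes and the image passes through unmodified, matching the "only convert if different" behavior described above.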
Optionally, the abstract camera includes at least one of a two-dimensional abstract camera, a three-dimensional abstract camera, an infrared abstract camera, and an iris abstract camera.
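The four abstract camera types listed above can be modeled, for example, as an enumeration keyed by image type; the string values and function names below are illustrative, not defined by the specification.

```python
from enum import Enum


class AbstractCameraType(Enum):
    # The four abstract camera types named in the specification.
    TWO_D = "2d"
    THREE_D = "3d"
    INFRARED = "infrared"
    IRIS = "iris"


# Illustrative abstraction-layer registry: image type -> abstract camera.
ABSTRACTION_LAYER = {t.value: t for t in AbstractCameraType}


def get_target_abstract_camera(image_type):
    # Look up, in the abstraction layer, the abstract camera that
    # corresponds to the image type of the original image.
    try:
        return ABSTRACTION_LAYER[image_type]
    except KeyError:
        raise ValueError(f"no abstract camera for image type {image_type!r}")
```

A registry like this is what lets new entity camera models be supported by maintaining only the abstraction layer, without touching drivers or application programs.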
The processing apparatus for images captured by a camera provided in one or more embodiments of the present specification can acquire an original image shot by a currently accessed entity camera, and acquire a target abstract camera corresponding to the original image in an abstraction layer, where the abstraction layer includes a plurality of abstract cameras and the abstract cameras correspond to types of images that can be shot by a plurality of entity cameras; and can convert the original image into a standard image corresponding to the target abstract camera for use by an application program. Unification of different types of entity cameras is thereby realized. For upgrade and maintenance of the entity cameras, only the abstraction layer needs to be maintained, without maintaining a plurality of different drivers, so upgrade and maintenance efficiency is improved; for replacement of an entity camera, neither a driver nor an application program needs to be replaced, so the complexity of the replacement operation is reduced; moreover, an application program can directly use the corresponding image without sensing the type of the entity camera, so compatibility is improved.
It should be noted that, the embodiment of the processing apparatus related to the image captured by the camera in the present application and the embodiment of the processing method related to the image captured by the camera in the present application are based on the same inventive concept, and therefore, for specific implementation of this embodiment, reference may be made to implementation of the processing method corresponding to the foregoing description, and repeated details are not repeated.
Further, corresponding to the methods shown in fig. 2 to fig. 6, based on the same technical concept, one or more embodiments of the present specification further provide a processing apparatus for capturing an image by a camera, the apparatus being configured to execute the processing method for capturing an image by a camera, and fig. 9 is a schematic structural diagram of the processing apparatus for capturing an image by a camera according to one or more embodiments of the present specification.
As shown in fig. 9, the processing device for images captured by a camera may vary considerably depending on its configuration or performance, and may include one or more processors 601 and a memory 602, where the memory 602 may store one or more applications or data. The memory 602 may provide transient or persistent storage. An application program stored in the memory 602 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the processing device. Further, the processor 601 may be arranged to communicate with the memory 602 and execute the series of computer-executable instructions in the memory 602 on the processing device. The processing device may also include one or more power supplies 603, one or more wired or wireless network interfaces 604, one or more input/output interfaces 605, one or more keyboards 606, and the like.
In a particular embodiment, a processing device for capturing images with a camera includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions in the processing device for capturing images with the camera, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
acquiring an original image shot by a currently accessed entity camera;
acquiring a target abstract camera corresponding to the original image in an abstract layer, wherein the abstract layer comprises a plurality of abstract cameras, and the abstract cameras correspond to image types which can be shot by different types of entity cameras;
and converting the original image into a standard image corresponding to the target abstract camera.
In one or more embodiments of the present description, an abstraction layer technology is used to implement unification of different types of entity cameras, and for upgrade maintenance of the entity cameras, only the abstraction layer needs to be maintained without maintaining a plurality of different drivers, thereby improving upgrade maintenance efficiency; for the replacement of the entity camera, a driver and an application program do not need to be replaced, so that the complexity of replacement operation is reduced; moreover, for the application program, the corresponding image can be directly used without sensing the type of the entity camera, and the compatibility is improved.
Optionally, when executed, the computer-executable instructions obtain, in an abstraction layer, a target abstract camera corresponding to the original image, and include:
determining an image type of the original image;
and acquiring a corresponding target abstract camera in an abstract layer according to the image type of the original image.
Optionally, when executed, the computer-executable instructions, before acquiring an original image captured by a currently accessed entity camera, further include:
determining a transmission channel corresponding to a currently accessed entity camera;
the acquiring of the original image shot by the currently accessed entity camera includes:
and acquiring an original image shot by the currently accessed entity camera through the transmission channel.
Optionally, when the computer-executable instructions are executed, the determining a transmission channel corresponding to the currently accessed entity camera includes:
respectively sending corresponding image type information to the currently accessed entity camera through the transmission channels corresponding to the image types which can be shot by the entity cameras of different types, so that the currently accessed entity camera determines the transmission channel corresponding to the currently accessed entity camera according to the image type information;
and if the image type information is successfully sent, determining the corresponding transmission channel as the transmission channel corresponding to the currently accessed entity camera.
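The camera-side half of this handshake — receiving the image type information and adopting the channel it arrived on — might be sketched as follows. The `EntityCamera` interface is hypothetical; the specification only states that the camera determines its channel from the received image type information.

```python
class EntityCamera:
    """Illustrative camera-side handler for the channel handshake."""

    def __init__(self, supported_types):
        # Image types this physical camera can actually shoot.
        self.supported_types = set(supported_types)
        self.channel = None

    def on_image_type_info(self, channel_id, image_type):
        # Called when image type information arrives on a channel.
        if image_type in self.supported_types:
            # The channel's image type matches: adopt this channel.
            self.channel = channel_id
            return True   # ack, so the host sees the send as successful
        return False      # mismatch: the host will try the next channel
```

On the host side, the channel whose send is acknowledged is then recorded as the transmission channel corresponding to the currently accessed entity camera.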
Optionally, when executed, the computer-executable instructions obtain, through the transmission channel, an original image captured by a currently accessed entity camera, including:
receiving an original image sent by a currently accessed entity camera through a corresponding transmission channel;
the determining the image type of the original image comprises:
and determining the image type corresponding to the transmission channel for receiving the original image as the image type of the original image.
Optionally, when executed, the computer-executable instructions obtain, through the transmission channel, an original image captured by a currently accessed entity camera, including:
selecting a target transmission channel from transmission channels corresponding to the currently accessed entity camera;
acquiring an original image shot by the entity camera through the target transmission channel;
the determining the image type of the original image comprises:
and determining the image type corresponding to the target transmission channel as the image type of the original image.
Optionally, when executed, the computer-executable instructions convert the original image into a standard image corresponding to the target abstract camera, including:
acquiring standard parameters corresponding to the target abstract camera;
and if the parameters of the original image are different from the standard parameters, converting the original image into a standard image with the parameters as the standard parameters.
Optionally, when the computer-executable instructions are executed, the abstract camera includes at least one of a two-dimensional abstract camera, a three-dimensional abstract camera, an infrared abstract camera, and an iris abstract camera.
The processing device for images captured by a camera provided in one or more embodiments of the present specification can acquire an original image shot by a currently accessed entity camera, and acquire a target abstract camera corresponding to the original image in an abstraction layer, where the abstraction layer includes a plurality of abstract cameras and the abstract cameras correspond to types of images that can be shot by a plurality of entity cameras; and can convert the original image into a standard image corresponding to the target abstract camera for use by an application program. Unification of different types of entity cameras is thereby realized. For upgrade and maintenance of the entity cameras, only the abstraction layer needs to be maintained, without maintaining a plurality of different drivers, so upgrade and maintenance efficiency is improved; for replacement of an entity camera, neither a driver nor an application program needs to be replaced, so the complexity of the replacement operation is reduced; moreover, an application program can directly use the corresponding image without sensing the type of the entity camera, so compatibility is improved.
It should be noted that, the embodiment of the processing apparatus related to the image captured by the camera in this specification and the embodiment of the processing method related to the image captured by the camera in this specification are based on the same inventive concept, and therefore, for specific implementation of this embodiment, reference may be made to the implementation of the processing method related to the image captured by the camera, and repeated details are not described again.
Further, based on the same technical concept, corresponding to the methods shown in fig. 2 to fig. 6, one or more embodiments of the present specification further provide a storage medium for storing computer-executable instructions, where in a specific embodiment, the storage medium may be a usb disk, an optical disk, a hard disk, and the like, and the storage medium stores computer-executable instructions that, when executed by a processor, implement the following processes:
acquiring an original image shot by a currently accessed entity camera;
acquiring a target abstract camera corresponding to the original image in an abstract layer, wherein the abstract layer comprises a plurality of abstract cameras, and the abstract cameras correspond to image types which can be shot by different types of entity cameras;
and converting the original image into a standard image corresponding to the target abstract camera.
In one or more embodiments of the present description, an abstraction layer technology is used to implement unification of different types of entity cameras, and for upgrade maintenance of the entity cameras, only the abstraction layer needs to be maintained without maintaining a plurality of different drivers, thereby improving upgrade maintenance efficiency; for the replacement of the entity camera, a driver and an application program do not need to be replaced, so that the complexity of replacement operation is reduced; moreover, for the application program, the corresponding image can be directly used without sensing the type of the entity camera, and the compatibility is improved.
Optionally, when executed by a processor, the computer-executable instructions stored in the storage medium obtain a target abstract camera corresponding to the original image in an abstract layer, and include:
determining an image type of the original image;
and acquiring a corresponding target abstract camera in an abstract layer according to the image type of the original image.
Optionally, the storage medium stores computer-executable instructions, which when executed by the processor, further include, before acquiring an original image captured by a currently accessed physical camera:
determining a transmission channel corresponding to a currently accessed entity camera;
the acquiring of the original image shot by the currently accessed entity camera includes:
and acquiring an original image shot by the currently accessed entity camera through the transmission channel.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the determining a transmission channel corresponding to the currently accessed entity camera includes:
respectively sending corresponding image type information to the currently accessed entity camera through the transmission channels corresponding to the image types which can be shot by the entity cameras of different types, so that the currently accessed entity camera determines the transmission channel corresponding to the currently accessed entity camera according to the image type information;
and if the image type information is successfully sent, determining the corresponding transmission channel as the transmission channel corresponding to the currently accessed entity camera.
Optionally, when executed by a processor, the computer-executable instructions stored in the storage medium obtain an original image captured by a currently accessed entity camera through the transmission channel, and include:
receiving an original image sent by a currently accessed entity camera through a corresponding transmission channel;
the determining the image type of the original image comprises:
and determining the image type corresponding to the transmission channel for receiving the original image as the image type of the original image.
Optionally, when executed by a processor, the computer-executable instructions stored in the storage medium obtain an original image captured by a currently accessed entity camera through the transmission channel, and include:
selecting a target transmission channel from transmission channels corresponding to the currently accessed entity camera;
acquiring an original image shot by the entity camera through the target transmission channel;
the determining the image type of the original image comprises:
and determining the image type corresponding to the target transmission channel as the image type of the original image.
Optionally, the computer-executable instructions stored in the storage medium, when executed by the processor, convert the original image into a standard image corresponding to the target abstract camera, and include:
acquiring standard parameters corresponding to the target abstract camera;
and if the parameters of the original image are different from the standard parameters, converting the original image into a standard image with the parameters as the standard parameters.
Optionally, when the computer-executable instructions stored in the storage medium are executed by the processor, the abstract camera includes at least one of a two-dimensional abstract camera, a three-dimensional abstract camera, an infrared abstract camera, and an iris abstract camera.
When executed by a processor, the computer-executable instructions stored in the storage medium provided in one or more embodiments of the present specification acquire an original image shot by a currently accessed entity camera, and acquire a target abstract camera corresponding to the original image in an abstraction layer, where the abstraction layer includes a plurality of abstract cameras and the abstract cameras correspond to types of images that can be shot by a plurality of entity cameras; and convert the original image into a standard image corresponding to the target abstract camera for use by an application program. Unification of different types of entity cameras is thereby realized. For upgrade and maintenance of the entity cameras, only the abstraction layer needs to be maintained, without maintaining a plurality of different drivers, so upgrade and maintenance efficiency is improved; for replacement of an entity camera, neither a driver nor an application program needs to be replaced, so the complexity of the replacement operation is reduced; moreover, an application program can directly use the corresponding image without sensing the type of the entity camera, so compatibility is improved.
It should be noted that the embodiment related to the storage medium in the present application and the embodiment related to the processing method of the image captured by the camera in the present application are based on the same inventive concept, and therefore, for specific implementation of the embodiment, reference may be made to implementation of the aforementioned corresponding processing method of the image captured by the camera, and repeated details are not described again.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology has advanced, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it personally, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to a software compiler used in program development, and the original code before compiling must also be written in a particular programming language, known as a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such microcontrollers include, but are not limited to, the ARC 625D, the Atmel AT91SAM, the Microchip PIC18F26K20, and the Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, the same functionality can be achieved by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Indeed, the means for performing the functions may even be regarded as both software modules for performing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same software and/or hardware or in multiple software and/or hardware when implementing the embodiments of the present description.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above descriptions are merely examples of this document and are not intended to limit it. Various modifications and changes may occur to those skilled in the art. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the disclosure shall fall within the scope of the claims of this document.

Claims (14)

1. A method for processing images shot by a camera, comprising the following steps:
acquiring an original image shot by a currently accessed physical camera;
obtaining, in an abstraction layer, a target abstract camera corresponding to the original image, wherein the abstraction layer comprises a plurality of abstract cameras, and the abstract cameras correspond to image types that can be shot by different types of physical cameras;
and converting the original image into a standard image corresponding to the target abstract camera.
2. The method of claim 1, wherein obtaining the target abstract camera corresponding to the original image in the abstraction layer comprises:
determining an image type of the original image;
and obtaining the corresponding target abstract camera in the abstraction layer according to the image type of the original image.
3. The method of claim 2, wherein before acquiring the original image shot by the currently accessed physical camera, the method further comprises:
determining a transmission channel corresponding to the currently accessed physical camera;
and the acquiring of the original image shot by the currently accessed physical camera comprises:
acquiring, through the transmission channel, the original image shot by the currently accessed physical camera.
4. The method of claim 3, wherein determining the transmission channel corresponding to the currently accessed physical camera comprises:
sending corresponding image type information to the currently accessed physical camera through each of the transmission channels corresponding to the image types that can be shot by the different types of physical cameras;
and if the image type information is sent successfully, determining the corresponding transmission channel as the transmission channel corresponding to the currently accessed physical camera.
5. The method of claim 3, wherein acquiring, through the transmission channel, the original image shot by the currently accessed physical camera comprises:
receiving the original image sent by the currently accessed physical camera through the corresponding transmission channel;
and the determining of the image type of the original image comprises:
determining the image type corresponding to the transmission channel through which the original image is received as the image type of the original image.
6. The method of claim 3, wherein acquiring, through the transmission channel, the original image shot by the currently accessed physical camera comprises:
selecting a target transmission channel from the transmission channels corresponding to the currently accessed physical camera;
acquiring, through the target transmission channel, the original image shot by the physical camera;
and the determining of the image type of the original image comprises:
determining the image type corresponding to the target transmission channel as the image type of the original image.
7. The method of claim 1, wherein converting the original image into the standard image corresponding to the target abstract camera comprises:
acquiring standard parameters corresponding to the target abstract camera;
and if the parameters of the original image differ from the standard parameters, converting the original image into a standard image whose parameters are the standard parameters.
8. The method of any one of claims 1 to 7, wherein the abstract cameras comprise at least one of a two-dimensional abstract camera, a three-dimensional abstract camera, an infrared abstract camera, and an iris abstract camera.
9. An apparatus for processing images shot by a camera, comprising:
a first acquisition module configured to acquire an original image shot by a currently accessed physical camera;
a second acquisition module configured to obtain, in an abstraction layer, a target abstract camera corresponding to the original image, wherein the abstraction layer comprises a plurality of abstract cameras, and the abstract cameras correspond to image types that can be shot by a plurality of physical cameras;
and a conversion module configured to convert the original image into a standard image corresponding to the target abstract camera.
10. The apparatus of claim 9, wherein:
the second acquisition module determines an image type of the original image; and
obtains the corresponding target abstract camera in the abstraction layer according to the image type of the original image.
11. The apparatus of claim 9, wherein:
the conversion module is configured to acquire standard parameters corresponding to the target abstract camera; and if the parameters of the original image differ from the standard parameters, convert the original image into a standard image whose parameters are the standard parameters.
12. The apparatus of any one of claims 9 to 11, wherein the abstract cameras comprise at least one of a two-dimensional abstract camera, a three-dimensional abstract camera, an infrared abstract camera, and an iris abstract camera.
13. A device for processing images shot by a camera, comprising: a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to carry out the steps of the method for processing images shot by a camera of any one of claims 1 to 8.
14. A storage medium storing computer-executable instructions which, when executed, implement the steps of the method for processing images shot by a camera of any one of claims 1 to 8.
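The abstraction-layer mechanism of claims 1, 2, 7, and 8 can be illustrated with a short sketch. This is an illustrative reconstruction, not the patented implementation: the concrete image types, the standard size as the "standard parameter", and the nearest-neighbour resize used as the conversion are all assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Hypothetical image types that different kinds of physical cameras can shoot
# (claim 8 names 2D, 3D, infrared, and iris abstract cameras).
IMAGE_2D, IMAGE_3D, IMAGE_IR, IMAGE_IRIS = "2d", "3d", "infrared", "iris"

@dataclass
class AbstractCamera:
    """An abstract camera in the abstraction layer: one per image type,
    carrying the standard parameters its standard images must satisfy."""
    image_type: str
    standard_size: Tuple[int, int]  # (width, height); an assumed example parameter

@dataclass
class Image:
    image_type: str
    size: Tuple[int, int]           # (width, height)
    pixels: List[List[int]]         # pixels[y][x]

# The abstraction layer of claim 1: a plurality of abstract cameras, keyed by
# the image type each kind of physical camera can shoot (sizes are made up).
ABSTRACTION_LAYER: Dict[str, AbstractCamera] = {
    IMAGE_2D:   AbstractCamera(IMAGE_2D,   (640, 480)),
    IMAGE_3D:   AbstractCamera(IMAGE_3D,   (320, 240)),
    IMAGE_IR:   AbstractCamera(IMAGE_IR,   (320, 240)),
    IMAGE_IRIS: AbstractCamera(IMAGE_IRIS, (640, 480)),
}

def target_abstract_camera(raw: Image) -> AbstractCamera:
    # Claim 2: determine the image type of the original image, then look up
    # the corresponding target abstract camera in the abstraction layer.
    return ABSTRACTION_LAYER[raw.image_type]

def _resize_nearest(pixels: List[List[int]], src: Tuple[int, int],
                    dst: Tuple[int, int]) -> List[List[int]]:
    # Assumed conversion step: simple nearest-neighbour scaling.
    (sw, sh), (dw, dh) = src, dst
    return [[pixels[y * sh // dh][x * sw // dw] for x in range(dw)]
            for y in range(dh)]

def to_standard_image(raw: Image) -> Image:
    # Claim 7: fetch the standard parameters of the target abstract camera and
    # convert only if the original image's parameters differ from them.
    cam = target_abstract_camera(raw)
    if raw.size == cam.standard_size:
        return raw
    return Image(raw.image_type, cam.standard_size,
                 _resize_nearest(raw.pixels, raw.size, cam.standard_size))
```

The design point is that upper layers consume only standard images keyed by abstract camera, so supporting a new physical camera model only requires mapping it onto an existing abstract camera rather than changing the consumers.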
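Claims 3 to 6 infer the image type from the transmission channel the image arrives on, rather than from the image content, after probing which channels the accessed camera answers. A minimal sketch, in which the channel API, the probe protocol, and the image representation are all invented for illustration:

```python
from typing import List, Optional

class Channel:
    """A toy transmission channel bound to one image type. Whether a send
    succeeds stands in for whether the accessed physical camera actually
    supports that image type (the probe of claim 4)."""
    def __init__(self, image_type: str, supported: bool):
        self.image_type = image_type
        self._supported = supported

    def send(self, message: str) -> bool:
        # True if the physical camera accepted the image-type information.
        return self._supported

    def receive_image(self) -> dict:
        # The image type is implied by the channel it arrives on (claim 5).
        return {"image_type": self.image_type, "pixels": b"..."}

def probe_channels(channels: List[Channel]) -> List[Channel]:
    # Claim 4: send each channel's image-type information; the channels on
    # which the send succeeds correspond to the currently accessed camera.
    return [ch for ch in channels if ch.send(ch.image_type)]

def acquire(channels: List[Channel]) -> Optional[dict]:
    # Claims 3, 5, 6: pick a target channel of the accessed camera, read the
    # original image through it, and take the channel's image type as the
    # image's type.
    usable = probe_channels(channels)
    if not usable:
        return None
    target = usable[0]          # claim 6: select a target transmission channel
    return target.receive_image()
```

Tying the type to the channel spares the receiver any content inspection: a 2D frame and an infrared frame may be byte-identical in layout, but they arrive on different channels and are therefore routed to different abstract cameras.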
CN201910738356.8A 2019-08-12 2019-08-12 Processing method, device and equipment for images shot by camera Active CN110602378B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110400207.8A CN113114941B (en) 2019-08-12 2019-08-12 Processing method, device and equipment for shooting image by camera
CN201910738356.8A CN110602378B (en) 2019-08-12 2019-08-12 Processing method, device and equipment for images shot by camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910738356.8A CN110602378B (en) 2019-08-12 2019-08-12 Processing method, device and equipment for images shot by camera

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110400207.8A Division CN113114941B (en) 2019-08-12 2019-08-12 Processing method, device and equipment for shooting image by camera

Publications (2)

Publication Number Publication Date
CN110602378A true CN110602378A (en) 2019-12-20
CN110602378B CN110602378B (en) 2021-03-23

Family

ID=68853915

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910738356.8A Active CN110602378B (en) 2019-08-12 2019-08-12 Processing method, device and equipment for images shot by camera
CN202110400207.8A Active CN113114941B (en) 2019-08-12 2019-08-12 Processing method, device and equipment for shooting image by camera

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110400207.8A Active CN113114941B (en) 2019-08-12 2019-08-12 Processing method, device and equipment for shooting image by camera

Country Status (1)

Country Link
CN (2) CN110602378B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784084A (en) * 2021-09-27 2021-12-10 联想(北京)有限公司 Processing method and device

Citations (7)

Publication number Priority date Publication date Assignee Title
US20100156906A1 (en) * 2008-12-19 2010-06-24 David Montgomery Shot generation from previsualization of a physical environment
EP2593197A2 (en) * 2010-07-14 2013-05-22 University Court Of The University Of Abertay Dundee Improvements relating to viewing of real-time, computer-generated environments
CN104349020A (en) * 2014-12-02 2015-02-11 北京中科大洋科技发展股份有限公司 Virtual camera and real camera switching system and method
CN105610904A (en) * 2015-12-17 2016-05-25 四川物联亿达科技有限公司 Access service system for unified access equipment
CN106851386A (en) * 2017-03-27 2017-06-13 青岛海信电器股份有限公司 The implementation method and device of augmented reality in television terminal based on android system
CN108304247A (en) * 2017-12-19 2018-07-20 华为技术有限公司 The method and apparatus of access camera, server, readable storage medium storing program for executing
CN109842777A (en) * 2017-11-28 2019-06-04 深圳市祈飞科技有限公司 A kind of industrial personal computer multichannel USB camera application method and system

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN106775902A (en) * 2017-01-25 2017-05-31 北京奇虎科技有限公司 A kind of method and apparatus of image procossing, mobile terminal
CN108388833A (en) * 2018-01-15 2018-08-10 阿里巴巴集团控股有限公司 A kind of image-recognizing method, device and equipment
CN109151303B (en) * 2018-08-22 2020-12-18 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium



Also Published As

Publication number Publication date
CN113114941A (en) 2021-07-13
CN113114941B (en) 2023-05-12
CN110602378B (en) 2021-03-23

Similar Documents

Publication Publication Date Title
CN105874776B (en) Image processing apparatus and method
US9998651B2 (en) Image processing apparatus and image processing method
US10003743B2 (en) Method, apparatus and computer program product for image refocusing for light-field images
US8704914B2 (en) Apparatus to automatically tag image and method thereof
KR102328098B1 (en) Apparatus and method for focusing of carmea device or an electronic device having a camera module
KR102547104B1 (en) Electronic device and method for processing plural images
US20130021488A1 (en) Adjusting Image Capture Device Settings
US9478036B2 (en) Method, apparatus and computer program product for disparity estimation of plenoptic images
CN108353152B (en) Image processing apparatus and method of operating the same
CN102547090A (en) Digital photographing apparatus and methods of providing pictures thereof
US9323653B2 (en) Apparatus and method for processing data
TWI703492B (en) Method, program and device for controlling user interface
CN110191154B (en) User tag determination method and device
CN108830605B (en) Mobile payment method, device and payment system
CN111461623B (en) Block chain-based warehouse bill creating method, device and equipment
US20200349355A1 (en) Method for determining representative image of video, and electronic apparatus for processing the method
CN104750246A (en) Content display method, head mounted display device and computer program product
CN104156993A (en) Method and device for switching face image in picture
CN110602378B (en) Processing method, device and equipment for images shot by camera
CN114168114A (en) Operator registration method, device and equipment
CN105183571A (en) Function calling method and device
EP3186956B1 (en) Display device and method of controlling therefor
CN105335200A (en) System upgrading method and device
US9959598B2 (en) Method of processing image and electronic device thereof
KR102188685B1 (en) Apparatas and method for generating application packages

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200921

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200921

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40019454

Country of ref document: HK

GR01 Patent grant