As you probably know, when you have an image and its associated depth map, areas in the background get disoccluded whenever the point of view changes, that is, they become visible. If you are a fan of Facebook 3D photos, you may have observed that these disoccluded areas get blurred. Some people (not I) are not too keen on this effect and would prefer to see the background magically appear out of thin air. Well, apparently, AI (Artificial Intelligence) can take care of that.

We all know that MiDaS can create great depth maps from single images. Check this post if you are not yet convinced: Getting depth maps from single images using Artificial Intelligence (AI). It's the inpainting we were not too sure about. So, not only can AI generate depth maps from single images, it can also fill the disoccluded areas. Pretty neat, I must say, if the results are up to the hype.

The paper "3D Photography using Context-aware Layered Depth Inpainting" by Meng-Li Shih et al. promises that inpainting can be done realistically with AI. There's a Google Colab for it, which means we can check it out right there in the browser without installing anything and without the need for a GPU card. In the Google Colab implementation, they use MiDaS to get a depth map from a given reference image and then do extreme inpainting using AI. Here's a video that explains how to run the Google Colab Python notebook.

The first step is to select an appropriate photograph. It works best with a photograph where the main subject will be popping out of the background. First, I let the software use MiDaS to create the depth map. Then, I bypass MiDaS and use my own depth map, which I created with SPM. I use GIMP to segment the photo using a so-called edge image (drawn with the pencil tool); this enables me to avoid creating many layers and making a depth map for each. Although the instructions in this step-by-step tutorial are for GIMP for Windows, you can accomplish the same effect in other image editing software.

Note that the depth map doesn't need to come from MiDaS; you can certainly use your own depth map (although you may have to blur it). If you use your own depth map, make sure that it is grayscale and that it is smooth enough. If your depth map is not smooth, it's going to take forever and Google Colab might disconnect you before the videos are created.

The output of 3D photo inpainting is the MiDaS depth map, a point cloud of the 3D scene, and four videos that kinda show off the inpainting (two of the zoom type à la Ken Burns and two of the wiggle/wobble type). To visualize the point cloud, which is in the PLY format, you can use MeshLab or CloudCompare (preferred).

I've gotta say that the filling of occlusions looks quite realistic even when the point of view changes drastically. That AI is really doing wonders, and it will only get better as the data sets used to train the neural networks get bigger.

Below are a few small code sketches for the steps above, for those who want to tinker.
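To make the disocclusion business a bit more concrete, here is a toy Python sketch that shifts each pixel horizontally by an amount proportional to its depth value and records which target pixels receive nothing; those empty pixels are exactly the disoccluded areas that have to be blurred or inpainted. This is only an illustration under my own assumptions (file names, a white-is-near depth convention, a made-up maximum parallax), not the method used by the paper or the notebook.

```python
import numpy as np
from PIL import Image

# Toy illustration of disocclusion: shift pixels by a depth-dependent
# parallax and see which target pixels never get written (the holes).
# File names and the 0=far / 255=near convention are assumptions, and the
# photo and the depth map are assumed to have the same dimensions.
image = np.asarray(Image.open("photo.jpg").convert("RGB"))
depth = np.asarray(Image.open("depth.png").convert("L"))

h, w = depth.shape
max_shift = 20  # maximum horizontal parallax in pixels (arbitrary)

shifted = np.zeros_like(image)
filled = np.zeros((h, w), dtype=bool)
xs = np.arange(w)

for y in range(h):
    # Nearer pixels (larger depth value) move more when the viewpoint changes.
    new_x = xs + (depth[y].astype(int) * max_shift) // 255
    valid = new_x < w
    shifted[y, new_x[valid]] = image[y, xs[valid]]
    filled[y, new_x[valid]] = True

# Everything that never got written is a disoccluded area.
holes = (~filled).astype(np.uint8) * 255
Image.fromarray(shifted).save("shifted_view.png")
Image.fromarray(holes).save("disocclusions.png")
```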
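For the "make sure your own depth map is grayscale and smooth enough" step, here is a minimal sketch using Pillow; the file name and the blur radius are placeholders to adjust for your own map.

```python
from PIL import Image, ImageFilter

# Force the depth map to single-channel grayscale and smooth it so the
# Colab run doesn't crawl (or disconnect) on noisy depth edges.
# File names and the blur radius are assumptions; tune to taste.
depth = Image.open("my_depth_map.png").convert("L")
depth = depth.filter(ImageFilter.GaussianBlur(radius=3))
depth.save("my_depth_map_smooth.png")
```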
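If you go the "bypass MiDaS" route, the general idea is to hand the notebook your photo and your own depth map under matching base names in whatever input folders it reads from. The `image/` and `depth/` folder names below are an assumption about the notebook's layout; check your copy before relying on this.

```python
import shutil

# Hypothetical layout: the photo goes in image/, your own depth map goes in
# depth/ with the same base name, so the notebook can skip MiDaS.
# Folder names are an assumption -- verify them in your copy of the notebook.
shutil.copy("my_photo.jpg", "image/my_photo.jpg")
shutil.copy("my_depth_map_smooth.png", "depth/my_photo.png")
```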
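And if you'd rather look at the PLY point cloud without leaving Python (instead of MeshLab or CloudCompare), here is a minimal sketch with the open3d package; the file name depends on what the notebook produced for your image.

```python
import open3d as o3d

# Load the point cloud produced by the notebook and open an interactive
# viewer window.  "my_photo.ply" is a placeholder for your own output file.
pcd = o3d.io.read_point_cloud("my_photo.ply")
print(pcd)  # prints the number of points
o3d.visualization.draw_geometries([pcd])
```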