From Google Maps Tiles to 3D-printed Gift

by Lazzy_Cat


Disclaimer: Some of the points in this guide may not align exactly with the Google EULA. Nevertheless, I am creating this for personal fun and have no intention of mass-producing it.

The idea of creating a 3D-printed object from Google Maps tiles has intrigued me for quite some time. There are numerous YouTube tutorials on the topic, and the method that looked most straightforward to me involved RenderDoc, a specialized graphics debugger that lets you capture a 3D object from the map and import it into Blender. Initially this approach seemed promising, but when I tried to convert the resulting mesh into a solid body and prepare it for printing, I ran into several limitations. The main problem is that the level of detail of the map-derived object depends on the capabilities of your GPU, and the browser’s tendency to optimize resource consumption often loses textures along the way. As a result, significant issues appeared in the model at the early mesh stage, and I personally have not been able to solve them. I am not a wizard.

So I decided to go my own way and, at the same time, master another brain-exciting interest: photogrammetry. I’ve always enjoyed stitching photos into panoramas, so why not move into three-dimensional space?

Step 0. Choosing an object. Let it be the Griffith Observatory in the Hollywood Hills.


Supplies

Software:

  • any screen capture program
  • 3DF Zephyr (free edition)
  • Fusion360
  • PrusaSlicer

Hardware:

  • PC
  • FDM printer
  • PLA filament

Photos of the Object

We need many photos of the object from different sides, and the object must have the same scale in all of them. The easiest way is to center the object and remove shortcuts and other interface elements that may interfere. Next, start a screen capture program (I used FastStone Capture) and rotate the view very slowly around a single imaginary axis by dragging while holding the Ctrl key (do not zoom!). Ideally, do all this on a large monitor at maximum resolution and color depth, and in the capture program’s settings use the highest possible frame rate and disable compression (if any).

What is the best camera angle to choose? It depends on the subject, but 45 degrees works best in my opinion. Imagine yourself as a drone and fly it around the object’s perimeter. Then take a few frames with the view set to nadir (looking straight down).
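To make the “drone orbit” idea concrete, here is a purely illustrative Python sketch (not part of the actual workflow) that computes evenly spaced viewpoints on a 45-degree orbit around an object at the origin; the radius and frame count are arbitrary example values:

```python
import numpy as np

# Illustrative only: evenly spaced "drone" viewpoints on a 45-degree
# orbit around an object centered at the origin, plus one nadir shot.
# radius_m and n_views are arbitrary example values.
radius_m = 100.0          # horizontal distance from the object
elevation_deg = 45.0      # camera elevation angle above the horizon
n_views = 20              # frames around the perimeter

elev = np.radians(elevation_deg)
height = radius_m * np.tan(elev)  # camera height for a 45-degree look-down angle

for i in range(n_views):
    az = 2 * np.pi * i / n_views
    x, y, z = radius_m * np.cos(az), radius_m * np.sin(az), height
    print(f"view {i:2d}: x={x:7.1f} y={y:7.1f} z={z:6.1f} (looking at origin)")

print(f"nadir view: x=0.0 y=0.0 z={2 * height:.1f} (looking straight down)")
```

Consecutive viewpoints on such an orbit overlap heavily, which is exactly what the photogrammetry software needs.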

Build a 3D Model

Next we need to build a 3D model of the object from the screen capture. For this we need the program that does the magic: 3DF Zephyr. The free edition allows up to 50 images per model, which is more than enough in our case.

So: start the program -> Workflow -> New Project -> Next -> Import Pictures From Video

Select the video, specify the directory for saving frames, and set the start and end points of the video. The sample rate is chosen experimentally: my video lasts 55 seconds (recorded at 30 fps), so a rate of 0.7 frames per second gives about 40 frames (55 × 0.7 ≈ 38). Keep in mind that adjacent frames need a significant overlap zone. Next, click “Extract Frames and import to workspace”. It is better to review the frames in the specified folder, delete the unsuitable ones, then re-create the project and import the survivors. I ended up with 23 frames.
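If you prefer to extract the frames yourself and import them as pictures instead of a video, a minimal sketch with OpenCV could look like this; the file name and the 0.7 fps sample rate are example values, not anything prescribed by Zephyr:

```python
import cv2  # pip install opencv-python

# Minimal sketch: extract frames from a screen recording at a fixed
# sample rate, as an alternative to Zephyr's built-in extractor.
# "capture.mp4" and the 0.7 fps rate are example values.
SAMPLE_FPS = 0.7

cap = cv2.VideoCapture("capture.mp4")
video_fps = cap.get(cv2.CAP_PROP_FPS)          # e.g. 30 fps
step = max(1, round(video_fps / SAMPLE_FPS))   # keep every Nth frame

saved, index = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frame_{saved:03d}.png", frame)  # PNG is lossless
        saved += 1
    index += 1
cap.release()
print(f"saved {saved} frames")
```

Reviewing the saved frames and deleting blurry or redundant ones works the same way as with Zephyr’s extractor.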

Click “Next”. The program will automatically calibrate the focal length. Click “Next” again. I have experimented with the different “Categories” and suggest selecting “General” for our case, though it is worth trying other options for yours. In the “Preset” option, choose “Deep”. Click “Next”, then “Run”.

The following step will take some time as the program will analyze the images, correlate them using common points, and identify the suitable ones. This process is very similar to stitching a panorama.

In my case, 3 photos out of 22 were rejected, which is a very good result. If many of your pictures turn out to be unsuitable, you will need to redo the video recording and frame extraction. Click “Finish”.

Then go to Workflow -> Advanced -> Dense Point Cloud Generation

Hit “Run”. This is the most time-consuming step, so you can get yourself a cup of tea. The result already resembles what we want:

Next, the obtained point cloud should be converted into a mesh. Choose Workflow -> Advanced -> Mesh Extraction, set the same “Categories” and “Presets” as before, and hit “Run”.

We’re almost there; only the final touch is left. Workflow -> Textured Mesh Generation. And voilà!

It looks quite good, but it is not yet suitable for printing. We got a mesh, i.e. a surface consisting of many small triangles that encloses no volume. Now select “Export” -> “Export Textured Mesh” and export it in OBJ format.
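Before moving on, you can sanity-check the exported OBJ for holes. Here is a small sketch using the Python trimesh library (my own addition, not part of the Zephyr workflow; the file name is an example):

```python
import trimesh  # pip install trimesh

# Quick sanity check of the exported mesh: a printable solid must be
# "watertight" (every edge shared by exactly two triangles).
# "model.obj" is an example file name.
mesh = trimesh.load("model.obj", force="mesh")

print(f"faces:      {len(mesh.faces)}")
print(f"watertight: {mesh.is_watertight}")   # False means there are holes
print(f"bounds:     {mesh.bounds}")          # axis-aligned bounding box

# trimesh can patch small holes; complex gaps still need manual repair.
trimesh.repair.fill_holes(mesh)
print(f"after fill_holes, watertight: {mesh.is_watertight}")
```

A photogrammetry mesh almost always reports watertight = False at this stage, which is exactly why the next step exists.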

From Mesh to Solid

I used Fusion360 to add volume. It may not be the most suitable choice: I was advised to use Meshmixer and gave it a try, but without much success, so I proceeded with Fusion360.

Next, let’s import the mesh into Fusion360 and make some adjustments. The mesh actually has a lot of holes that may not be visible at this level of detail. Go to “Mesh” -> “Repair” and select the simplest mode, “Close Holes”. Then use “Modify” -> “Reduce” to remove the numerous details and artifacts that are smaller than the intended print layer thickness (0.2 mm). Finally, “Modify” -> “Convert Mesh” yields a solid body ready for slicing.
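For those who prefer scripting, roughly the same cleanup can be sketched with the Open3D Python library. This is an assumed alternative, not what I actually used; the file names and the target triangle count are example values to tune:

```python
import open3d as o3d  # pip install open3d

# Rough scripted equivalent of the Fusion360 "Reduce" step.
# "model.obj" and the 100k triangle target are example values.
mesh = o3d.io.read_triangle_mesh("model.obj")

# Drop degenerate geometry left over from photogrammetry.
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()
mesh.remove_non_manifold_edges()

# Decimate: details finer than the 0.2 mm layer height are wasted anyway.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
print(f"{len(mesh.triangles)} -> {len(simplified.triangles)} triangles")

o3d.io.write_triangle_mesh("model_reduced.obj", simplified)
```

Closing holes and converting to a true solid body is still easier to judge by eye in Fusion360, which is why I kept that part interactive.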


It’s not quite what I expected, but I’m generally happy with it. Export as OBJ again and we can move on to the slicer.

Slicing

I’m going to use PrusaSlicer. I don’t need such a large object, so I’ll trim it around the edges and (most importantly) make a cut at the bottom to get a flat base.
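The flat-base cut can also be done programmatically before slicing. Here is a hedged sketch using trimesh’s plane slicing; the file names and the 10% cut height are example values:

```python
import trimesh  # pip install trimesh

# Sketch: cut everything below z = cut_z and cap the cross-section to
# get a flat base, as an alternative to PrusaSlicer's cut tool.
# "model.obj" and the 10% cut height are example values.
mesh = trimesh.load("model.obj", force="mesh")

z_min, z_max = mesh.bounds[0][2], mesh.bounds[1][2]
cut_z = z_min + 0.10 * (z_max - z_min)   # chop off the bottom 10%

sliced = trimesh.intersections.slice_mesh_plane(
    mesh,
    plane_normal=[0, 0, 1],      # keep the side the normal points to
    plane_origin=[0, 0, cut_z],
    cap=True,                    # fill the cut with a flat face
)
sliced.export("model_flat.obj")
```

The cap only comes out clean if the mesh around the cut is reasonably well-formed, so check the result before printing.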

Houston, we’ve got a problem: the base is not flat enough for the model to stand on. Apparently this comes from Fusion’s errors during the “Convert Mesh” operation. We will have to print this area with supports at maximum infill.

So: layer height 0.2 mm, infill 30% (3D honeycomb), raft on, ironing off. My printer (Kingroon KP3S Pro) is quite good but basic, and it sometimes has issues with “linear advance”, so I have to print at low speed.


So… Let’s get it!

And finally (added some color):

Well… I think I got it ;-) Not bad, considering the model was created from virtually nothing.

Some Conclusions…

  • Photogrammetry is awesome. For household use, there is no point in 3D scanners costing thousands of dollars; a smartphone with a good camera and steady hands is enough. I tried digitizing a toy. Look at the detail:

  • I need to learn Meshmixer or something similar. Fusion360 is poorly suited to this kind of mesh work.