diff --git a/.vscode/settings.json b/.vscode/settings.json
index 01a2178..69cf3fd 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -1,6 +1,9 @@
{
"cSpell.words": [
"callsign",
- "centimetre"
+ "centimetre",
+ "grayscale",
+ "inpainting",
+ "rotoscope"
]
}
\ No newline at end of file
diff --git a/content/blog/2022-01-19-monocular-blender.md b/content/blog/2022-01-19-monocular-blender.md
new file mode 100644
index 0000000..4a81da5
--- /dev/null
+++ b/content/blog/2022-01-19-monocular-blender.md
@@ -0,0 +1,75 @@
+---
+layout: page
+title: "Monocular depth mapping in Blender"
+description: "My 3D pipeline is backed by neural networks"
+date: 2022-01-19
+tags: random 3d-pipeline
+draft: true
+extra:
+ uses_katex: false
+---
+
+A while back, I encountered an interesting trend going on over on TikTok. People were turning their photos into videos with 3D camera movements.
+
+Having created content like this myself before in both Adobe After Effects and Blender, I just assumed I had come across a few people who also knew the process for creating [2.5D](https://en.wikipedia.org/wiki/2.5D_(visual_perception)) content. For anyone who has not seen 2.5D content before, check out the video below by the amazing artist [Spencer Miller](https://www.instagram.com/SpencerMiller/), who is well known for his 2.5D and 3D concert videos.
+
+
+
+
+
+Alright. Back to TikTok, here is an example of one of the trend videos I came across. Notice the graphical artifacting near the top and bottom of the video? That artifacting made me realize these videos are not your standard 2.5D content; something else was going on.
+
+
+
+
+
+
+
+There was no way so many people had suddenly learned how to work in 2.5D, all owned the required software, and all had the time to painstakingly rotoscope out every depth level of their photos to make it all look good.
+
+Conveniently, it took very little effort to find out that this was all being done by a video editing app called [CapCut](https://www.capcut.net/). I'll spare you the details of researching how this CapCut effect works, and skip right to the technology powering it.
+
+## Playing with Neural Networks
+
+From my research, this technology (called *context-aware inpainting*) stems from a paper called [*3D Photography Using Context-Aware Layered Depth Inpainting*](https://doi.org/10.1109/CVPR42600.2020.00805). I wanted to try replicating this effect in Blender, so I loaded up the [demo for this paper](https://github.com/vt-vl-lab/3d-photo-inpainting), tried it out on some images I had lying around, and immediately ran into issues with incorrect depth estimation results.
+
+After some experimentation, I decided to take a step back from neural-network-powered inpainting and instead dig into the underlying depth estimation research this paper was built on.
+
+The [Embodied AI Foundation](https://www.embodiedaifoundation.org/) has a paper called [Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer](https://doi.org/10.1109/TPAMI.2020.3019967) (much better known as **MiDaS**). This paper and its [accompanying Python library](https://github.com/isl-org/MiDaS) describe and implement a high-accuracy method for estimating depth maps from a monocular (single-lens camera) image.
+
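+For anyone curious what that looks like in practice, using MiDaS boils down to a few lines of PyTorch. Here is a minimal sketch of single-image inference, adapted from the library's documentation (the model and transform names are the ones documented at the time of writing and may differ between releases):
+
+```python
+import cv2
+import torch
+
+# Pull the model and its matching pre-processing transforms from torch.hub
+device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
+transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform
+
+# Load an image and convert it to RGB, since OpenCV reads BGR
+img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)
+
+with torch.no_grad():
+    prediction = midas(transform(img).to(device))
+    # Resize the prediction back up to the original image resolution
+    prediction = torch.nn.functional.interpolate(
+        prediction.unsqueeze(1),
+        size=img.shape[:2],
+        mode="bicubic",
+        align_corners=False,
+    ).squeeze()
+
+depth = prediction.cpu().numpy()  # Relative (inverse) depth per pixel
+```
+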
+## My goal
+
+My goal for this side-project at this point was to create a "zero-thought, one-click" system for bringing monocular images into Blender as full 3D meshes with projection-mapped textures.
+
+This requires three parts:
+
+- A simple system for creating depth maps from images
+- An in-DCC interface for importing images into Blender
+- Some code to tie everything together and actually create the object
+
+### Using Docker with GPU-passthrough for fast depth computation
+
+I happened to grab myself an NVIDIA graphics card with around 4800 CUDA cores last year, planning to use it for 3D rendering and machine learning experimentation, so my top priority was making sure I could actually put it to work on this project.
+
+Luckily, NVIDIA has a solution for doing just this through their project called the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-docker) (aka `nvidia-docker`).
+
+> The NVIDIA Container Toolkit allows users to build and run GPU accelerated Docker containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs.
+> \[source: [NVIDIA](https://github.com/NVIDIA/nvidia-docker#introduction)\]
+
+Essentially, this toolkit leverages an existing Docker Engine on a host and provides a bit of a "side channel" that lets containers with the appropriate client software access the host's GPU resources.
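+
+From inside a container, a quick way to confirm the passthrough actually works is to ask PyTorch what it can see. Something like this does the trick (assuming the container image ships a CUDA-enabled PyTorch build):
+
+```python
+import torch
+
+# With the NVIDIA Container Toolkit wired up correctly, a container sees
+# the host's GPU much like a native process would
+print("CUDA available:", torch.cuda.is_available())
+if torch.cuda.is_available():
+    print("Device:", torch.cuda.get_device_name(0))
+```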
+
+Using the toolkit, I threw together a quick project called `midas-depth-solve` that provides a Docker container for running MiDaS through a little [batch-processing wrapper script](https://github.com/Ewpratten/midas-depth-solve/blob/master/solve.py) I wrote. Simply provide a directory full of images in whatever format you'd like, along with some configuration flags, and it will spit out a grayscale depth map for each image.
+
+{{ github(repo="ewpratten/midas-depth-solve") }}
+
+
+Information on how to use this container on its own can be found in the project README.
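+
+Conceptually, the wrapper is nothing more than a loop over the input directory that feeds each image through MiDaS and writes the result back out. Here is a simplified sketch (not the actual `solve.py`; `predict_depth` stands in for the inference code shown earlier, and the file naming is purely illustrative):
+
+```python
+from pathlib import Path
+
+import cv2
+import numpy as np
+
+from depth import predict_depth  # Hypothetical wrapper around the MiDaS inference above
+
+
+def solve_directory(input_dir: str, output_dir: str) -> None:
+    """Depth-solve every image in a directory, writing grayscale PNGs."""
+    out = Path(output_dir)
+    out.mkdir(parents=True, exist_ok=True)
+
+    for image_path in sorted(Path(input_dir).iterdir()):
+        img = cv2.imread(str(image_path))
+        if img is None:
+            continue  # Not a readable image; skip it
+
+        depth = predict_depth(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
+
+        # Normalize to 0..1 and write a 16-bit grayscale PNG so the depth
+        # levels are not crushed down to 256 values
+        depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
+        cv2.imwrite(str(out / f"{image_path.stem}.png"),
+                    (depth * 65535).astype(np.uint16))
+```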
+
+An example output from MiDaS is shown below. I have boosted the exposure significantly to make the depth levels easier to see, since depth maps are generally quite low-contrast.
+
+
+
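+If you would rather not eyeball exposure settings in an image editor, the same "boost" can be done programmatically by stretching the depth map across its full value range and applying a gamma lift (a quick hypothetical snippet, not part of the wrapper; the filenames are made up):
+
+```python
+import cv2
+import numpy as np
+
+# Stretch a low-contrast depth map across the full 8-bit range for viewing
+depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
+depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
+
+# A gamma lift (square root) brightens the darker levels, similar to an exposure boost
+cv2.imwrite("depth-exaggerated.png", (np.sqrt(depth) * 255).astype(np.uint8))
+```
+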
+### The Blender plugin
+
+### Actually creating textured 3D meshes
+
diff --git a/sass/styles/layout.scss b/sass/styles/layout.scss
index 630f77c..ca75fd5 100644
--- a/sass/styles/layout.scss
+++ b/sass/styles/layout.scss
@@ -145,7 +145,16 @@ ul {
box-shadow: rgba(0, 0, 0, 0.24) 0px 3px 8px;
transition: all 0.1s ease 0s;
- &:hover{
+ &:hover {
box-shadow: rgba(0, 0, 0, 0.5) 0px 3px 15px;
}
}
+
+#instagram-embed-0 {
+ margin: auto !important;
+}
+
+.tiktok-embed {
+ border: none !important;
+ margin: auto !important;
+}
diff --git a/static/images/posts/monocular-blender/exaggerated-depth.png b/static/images/posts/monocular-blender/exaggerated-depth.png
new file mode 100644
index 0000000..1b7eed7
Binary files /dev/null and b/static/images/posts/monocular-blender/exaggerated-depth.png differ