<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.2.2">Jekyll</generator><link href="https://burak.kakilli.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://burak.kakilli.com/" rel="alternate" type="text/html" /><updated>2025-06-22T15:39:20-04:00</updated><id>https://burak.kakilli.com/feed.xml</id><title type="html">Burak Kakillioglu</title><subtitle>Burak Kakillioglu&apos;s personal website and blog.</subtitle><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><entry><title type="html">Share Host GPU with LXC containers in Proxmox</title><link href="https://burak.kakilli.com/proxmox-gpu-share/" rel="alternate" type="text/html" title="Share Host GPU with LXC containers in Proxmox" /><published>2024-10-26T18:30:00-04:00</published><updated>2024-10-26T18:30:00-04:00</updated><id>https://burak.kakilli.com/proxmox-gpu</id><content type="html" xml:base="https://burak.kakilli.com/proxmox-gpu-share/"><![CDATA[<p>You have a Proxmox server with a GPU and want to enable hardware accelleration on possibly different services, such as Ollama, Plex, Frigate etc. One option is to create a VM on Proxmox and do a GPU-passthrough to that VM. Then your services may have access to the GPU in that VM.</p>

<p>What if you want to keep your services isolated and don’t want them in the same VM? Then GPU passthrough is off the table, since you can pass the GPU through to only a single VM. In this case there is a second option, which lets you share your GPU with multiple services that don’t necessarily run together. The catch, however, is that you cannot use virtual machines (VMs); you have to use LXC containers instead. This is not necessarily a bad option, since an LXC container can run almost any standalone service.</p>

<p>Pros:</p>
<ul>
  <li>Share GPU with multiple LXC containers.</li>
  <li>Isolated services with hardware acceleration.</li>
  <li>LXC =&gt; Easier maintenance, deployment, backup and restore.</li>
</ul>

<p>Cons:</p>
<ul>
  <li>Does not work with VMs.</li>
  <li>Does not work if the GPU is already passed through to another VM.</li>
</ul>

<p class="notice--info"><strong>Shout out:</strong> These instuctions are majorly based on <a href="https://yomis.blog/nvidia-gpu-in-proxmox-lxc" target="_blank">Yomi’s excellent blog post here</a>. Feel free to follow that if you choose to, or fail with the instructions below.</p>

<p>My Proxmox host runs Debian 11 (Bullseye), and the LXC container runs Ubuntu 22.04 (Jammy Jellyfish).</p>

<h2 id="install-nvidia-driver-on-the-proxmox-host">Install Nvidia driver on the Proxmox host</h2>

<p class="notice--warning"><strong>Important:</strong> Make sure the GPU is not passed through any existing VM on the Proxmox supervisor.</p>

<h3 id="download-driver">Download driver</h3>

<p>We will not install the driver from the apt repository; instead, we will download the <code class="language-plaintext highlighter-rouge">.run</code> file from the Nvidia servers. This is crucial because we want to install the exact same driver version on both the Proxmox host and the LXC container.</p>

<p>Find a recent driver from the Nvidia archive:<br />
<a href="https://download.nvidia.com/XFree86/Linux-x86_64/" target="_blank">https://download.nvidia.com/XFree86/Linux-x86_64/</a></p>

<p class="notice--info"><strong>Note:</strong> I used version <code class="language-plaintext highlighter-rouge">550.127.05</code>. Check your application’s requirements to choose an appropriate version.</p>

<p class="notice--warning"><strong>Tip:</strong> Use <a href="https://www.nvidia.com/en-us/drivers/" target="_blank">Nvidia’s driver search tool</a> if the link above does not work.</p>

<h3 id="install-driver">Install driver</h3>
<p>Run the <code class="language-plaintext highlighter-rouge">.run</code> file with the <code class="language-plaintext highlighter-rouge">--dkms</code> flag, which is important for installing the kernel modules. Do not install display-related (xorg etc.) modules during installation.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>sh NVIDIA-Linux-x86_64-550.127.05.run <span class="nt">--dkms</span>
</code></pre></div></div>
<p>Reboot the Proxmox host. When you log back in, run <code class="language-plaintext highlighter-rouge">nvidia-smi</code> to verify that the driver installed correctly:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>nvidia-smi
Sat Oct 26 23:19:53 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05   Driver Version: 550.127.05   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|<span class="o">===============================</span>+<span class="o">======================</span>+<span class="o">======================</span>|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
|  0%   47C    P8    13W / 180W |      1MiB /  8117MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|<span class="o">=============================================================================</span>|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
</code></pre></div></div>

<p class="notice--warning"><strong>Remember:</strong> You may need to repeat this process (possibly with more recent driver version) if you upgrade the proxmox kernel (e.g., upgrade proxmox from Debian 11 to Debian 12).</p>

<h2 id="share-gpu-with-lxc-container">Share GPU with LXC Container</h2>

<p>Next, we will share the kernel modules on the Proxmox host with an LXC container.</p>

<p>You may use an existing LXC container, or create a new one if you haven’t already.</p>

<h3 id="lxc-configuration-on-the-proxmox-host">LXC Configuration on the Proxmox host</h3>

<p>Still on the Proxmox host, list the Nvidia devices:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">ls</span> <span class="nt">-al</span> /dev/nvidia<span class="k">*</span>
crw-rw-rw- 1 root root 195,   0 Oct 25 22:57 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Oct 25 22:57 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Oct 25 22:57 /dev/nvidia-modeset
crw-rw-rw- 1 root root 234,   0 Oct 25 22:57 /dev/nvidia-uvm
crw-rw-rw- 1 root root 234,   1 Oct 25 22:57 /dev/nvidia-uvm-tools
</code></pre></div></div>
<p>Note down the major device numbers (the column right after the group column). They are <code class="language-plaintext highlighter-rouge">195</code> and <code class="language-plaintext highlighter-rouge">234</code> in my case; yours could be different.</p>

<p>My LXC container ID is <code class="language-plaintext highlighter-rouge">105</code>. I opened its configuration file with <code class="language-plaintext highlighter-rouge">nano</code>; you may use any other text editor, such as <code class="language-plaintext highlighter-rouge">vi</code>, <code class="language-plaintext highlighter-rouge">vim</code>, or <code class="language-plaintext highlighter-rouge">nvim</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>nano /etc/pve/lxc/105.conf
</code></pre></div></div>
<p>Append the following to the .conf file. Remember to use the numbers you found earlier (<code class="language-plaintext highlighter-rouge">195</code> and <code class="language-plaintext highlighter-rouge">234</code>).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 234:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
</code></pre></div></div>
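<p>After saving the file, restart the container from the Proxmox host so the new device entries take effect (<code class="language-plaintext highlighter-rouge">105</code> is my container ID; use yours):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pct stop 105
$ pct start 105
</code></pre></div></div>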

<p>This completes the configuration on the Proxmox host. The rest of the instructions are performed inside the LXC container.</p>

<h3 id="install-driver-on-lxc-container">Install driver on LXC container</h3>
<p>Start the LXC container and switch to its console. Download the exact same <code class="language-plaintext highlighter-rouge">NVIDIA-Linux-x86_64-550.127.05.run</code> driver into the LXC container and run it with the <code class="language-plaintext highlighter-rouge">--no-kernel-module</code> flag to skip the installation of kernel modules; the container shares them with the Proxmox host.</p>

<p>Do not install display-related (xorg etc.) modules during installation.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>sh NVIDIA-Linux-x86_64-550.127.05.run <span class="nt">--no-kernel-module</span>
</code></pre></div></div>
<p>Reboot the LXC container and run <code class="language-plaintext highlighter-rouge">nvidia-smi</code> after logging in. You should see the same output as on the Proxmox host:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>nvidia-smi
Sun Oct 27 04:16:31 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05   Driver Version: 550.127.05   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|<span class="o">===============================</span>+<span class="o">======================</span>+<span class="o">======================</span>|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
|  0%   48C    P8    13W / 180W |      1MiB /  8117MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|<span class="o">=============================================================================</span>|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
</code></pre></div></div>
<p>You can now run your service in the LXC container with GPU hardware acceleration!</p>]]></content><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><summary type="html"><![CDATA[Share a GPU with multiple LXC containers in Proxmox]]></summary></entry><entry><title type="html">Frigate on Proxmox with GPU</title><link href="https://burak.kakilli.com/frigate-on-proxmox/" rel="alternate" type="text/html" title="Frigate on Proxmox with GPU" /><published>2024-10-26T00:00:00-04:00</published><updated>2024-10-27T06:30:00-04:00</updated><id>https://burak.kakilli.com/frigate-gpu-docker</id><content type="html" xml:base="https://burak.kakilli.com/frigate-on-proxmox/"><![CDATA[<p>By default, <a href="https://docs.frigate.video" target="_blank"><strong>Frigate</strong></a> is distributed as a Docker container, which makes the installation process simple and relatively uniform across users. As for the <code class="language-plaintext highlighter-rouge">docker</code> engine itself, you may opt to run it directly on a host OS, in a VM, or even in a higher virtualization layer.</p>

<p>Although running <strong>Frigate</strong> itself in any Docker environment is straightforward, setting up GPU acceleration becomes more challenging as the level of virtualization increases. This guide will hopefully help those who want to run <strong>Frigate</strong> on their <strong>Proxmox</strong> server with an <strong>Nvidia GPU</strong> for ML acceleration.</p>

<p><em>This will be achieved by running an LXC container that hosts a Docker engine.</em></p>

<p>In a nutshell:</p>

<ol>
  <li>Install Nvidia driver on the Proxmox host.</li>
  <li>Create an LXC container and share the host’s Nvidia kernel modules with it.</li>
  <li>Install Docker and the Nvidia container runtime in the LXC container.</li>
  <li>Install Frigate with hardware acceleration.</li>
</ol>

<p class="notice--success"><strong>Note:</strong> First two steps are also covered in <a href="/proxmox-gpu-share/" target="_blank">the other post</a>. They are repeated here for the sake of completeness.</p>

<p class="notice--info"><strong>Shout out:</strong> Step one and step two are largely based on <a href="https://yomis.blog/nvidia-gpu-in-proxmox-lxc" target="_blank">Yomi’s excellent blog post here</a>. Feel free to follow that if you choose to, or failed with the instruction below.</p>

<p>My Proxmox host runs Debian 11 (Bullseye), and the LXC container runs Ubuntu 22.04 (Jammy Jellyfish).</p>

<h2 id="install-nvidia-driver-on-the-proxmox-host">Install Nvidia driver on the Proxmox host</h2>

<p class="notice--warning"><strong>Important:</strong> Make sure the GPU is not passed through any existing VM on the Proxmox supervisor.</p>

<h3 id="download-driver">Download driver</h3>

<p>We will not install the driver from the apt repository; instead, we will download the <code class="language-plaintext highlighter-rouge">.run</code> file from the Nvidia servers. This is crucial because we want to install the exact same driver version on both the Proxmox host and the LXC container.</p>

<p>Find a recent driver from the Nvidia archive:<br />
<a href="https://download.nvidia.com/XFree86/Linux-x86_64/" target="_blank">https://download.nvidia.com/XFree86/Linux-x86_64/</a></p>

<p class="notice--info"><strong>Attention:</strong> NVidia Tensor RT detector on Frigate requires driver version <code class="language-plaintext highlighter-rouge">&gt;=530</code> (<a href="https://docs.frigate.video/configuration/object_detectors#minimum-hardware-support">see details</a>). I used version <code class="language-plaintext highlighter-rouge">550.127.05</code>, but you may choose more recent version.</p>

<p class="notice--warning"><strong>Tip:</strong> Use <a href="https://www.nvidia.com/en-us/drivers/" target="_blank">Nvidia’s driver search tool</a> if the link above does not work.</p>

<h3 id="install-driver">Install driver</h3>
<p>Run the <code class="language-plaintext highlighter-rouge">.run</code> file with the <code class="language-plaintext highlighter-rouge">--dkms</code> flag, which is important for installing the kernel modules. Do not install display-related (xorg etc.) modules during installation.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>sh NVIDIA-Linux-x86_64-550.127.05.run <span class="nt">--dkms</span>
</code></pre></div></div>
<p>Reboot the Proxmox host. When you log back in, run <code class="language-plaintext highlighter-rouge">nvidia-smi</code> to verify that the driver installed correctly:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>nvidia-smi
Sat Oct 26 23:19:53 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05   Driver Version: 550.127.05   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|<span class="o">===============================</span>+<span class="o">======================</span>+<span class="o">======================</span>|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
|  0%   47C    P8    13W / 180W |      1MiB /  8117MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|<span class="o">=============================================================================</span>|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
</code></pre></div></div>

<p class="notice--warning"><strong>Remember:</strong> You may need to repeat this process (possibly with more recent driver version) if you upgrade the proxmox kernel (e.g., upgrade proxmox from Debian 11 to Debian 12).</p>

<h2 id="share-gpu-with-lxc-container">Share GPU with LXC Container</h2>

<p>Next, we will share the kernel modules on the Proxmox host with an LXC container.</p>

<p>You may use an existing LXC container, or create a new one if you haven’t already.</p>

<h3 id="lxc-configuration-on-the-proxmox-host">LXC Configuration on the Proxmox host</h3>

<p>Still on the Proxmox host, list the Nvidia devices:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">ls</span> <span class="nt">-al</span> /dev/nvidia<span class="k">*</span>
crw-rw-rw- 1 root root 195,   0 Oct 25 22:57 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Oct 25 22:57 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Oct 25 22:57 /dev/nvidia-modeset
crw-rw-rw- 1 root root 234,   0 Oct 25 22:57 /dev/nvidia-uvm
crw-rw-rw- 1 root root 234,   1 Oct 25 22:57 /dev/nvidia-uvm-tools
</code></pre></div></div>
<p>Note down the major device numbers (the column right after the group column). They are <code class="language-plaintext highlighter-rouge">195</code> and <code class="language-plaintext highlighter-rouge">234</code> in my case; yours could be different.</p>

<p>My LXC container ID is <code class="language-plaintext highlighter-rouge">105</code>. I opened its configuration file with <code class="language-plaintext highlighter-rouge">nano</code>; you may use any other text editor, such as <code class="language-plaintext highlighter-rouge">vi</code>, <code class="language-plaintext highlighter-rouge">vim</code>, or <code class="language-plaintext highlighter-rouge">nvim</code>.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>nano /etc/pve/lxc/105.conf
</code></pre></div></div>
<p>Append the following to the .conf file. Remember to use the numbers you found earlier (<code class="language-plaintext highlighter-rouge">195</code> and <code class="language-plaintext highlighter-rouge">234</code>).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 234:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
</code></pre></div></div>
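<p>After saving the file, restart the container from the Proxmox host so the new device entries take effect (<code class="language-plaintext highlighter-rouge">105</code> is my container ID; use yours):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pct stop 105
$ pct start 105
</code></pre></div></div>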

<p>This completes the configuration on the Proxmox host. The rest of the instructions are performed inside the LXC container.</p>

<h3 id="install-driver-on-lxc-container">Install driver on LXC container</h3>
<p>Start the LXC container and switch to its console. Download the exact same <code class="language-plaintext highlighter-rouge">NVIDIA-Linux-x86_64-550.127.05.run</code> driver into the LXC container and run it with the <code class="language-plaintext highlighter-rouge">--no-kernel-module</code> flag to skip the installation of kernel modules; the container shares them with the Proxmox host.</p>

<p>Do not install display-related (xorg etc.) modules during installation.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>sh NVIDIA-Linux-x86_64-550.127.05.run <span class="nt">--no-kernel-module</span>
</code></pre></div></div>
<p>Reboot the LXC container and run <code class="language-plaintext highlighter-rouge">nvidia-smi</code> after logging in. You should see the same output as on the Proxmox host:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>nvidia-smi
Sun Oct 27 04:16:31 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05   Driver Version: 550.127.05   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|<span class="o">===============================</span>+<span class="o">======================</span>+<span class="o">======================</span>|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
|  0%   48C    P8    13W / 180W |      1MiB /  8117MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|<span class="o">=============================================================================</span>|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
</code></pre></div></div>
<p>Now our LXC container can use the GPU!</p>

<h2 id="install-docker-and-nvidia-container-runtime">Install Docker and Nvidia Container Runtime</h2>

<p>Install the docker engine by following the official instructions:<br />
<a href="https://docs.docker.com/engine/install/ubuntu" target="_blank">https://docs.docker.com/engine/install/ubuntu</a></p>

<h3 id="install-nvidia-driver-toolkit">Install nvidia-driver-toolkit</h3>

<p>Add the <code class="language-plaintext highlighter-rouge">nvidia-container-toolkit</code> repository to the apt sources.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ distribution</span><span class="o">=</span><span class="si">$(</span><span class="nb">.</span> /etc/os-release<span class="p">;</span><span class="nb">echo</span> <span class="nv">$ID$VERSION_ID</span><span class="si">)</span> <span class="se">\</span>
  <span class="o">&amp;&amp;</span> curl <span class="nt">-fsSL</span> https://nvidia.github.io/libnvidia-container/gpgkey | gpg <span class="nt">--dearmor</span> <span class="nt">-o</span> /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg <span class="se">\</span>
  <span class="o">&amp;&amp;</span> curl <span class="nt">-s</span> <span class="nt">-L</span> https://nvidia.github.io/libnvidia-container/<span class="nv">$distribution</span>/libnvidia-container.list | <span class="se">\</span>
        <span class="nb">sed</span> <span class="s1">'s#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'</span> | <span class="se">\</span>
        <span class="nb">tee</span> /etc/apt/sources.list.d/nvidia-container-toolkit.list
</code></pre></div></div>
<p>Install the toolkit and restart docker.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>apt update
<span class="nv">$ </span>apt <span class="nb">install </span>nvidia-container-toolkit
<span class="nv">$ </span>systemctl docker restart
</code></pre></div></div>
<h3 id="configure-nvidia-runtime">Configure nvidia runtime</h3>

<p>Open the <code class="language-plaintext highlighter-rouge">/etc/nvidia-container-runtime/config.toml</code> file, find the <code class="language-plaintext highlighter-rouge">#no-cgroups=false</code> line, uncomment it, and set it to <code class="language-plaintext highlighter-rouge">true</code>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
no-cgroups=true
...
</code></pre></div></div>
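<p>Equivalently, as a one-liner (verify the file afterwards):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sed -i 's/^#no-cgroups=false/no-cgroups=true/' /etc/nvidia-container-runtime/config.toml
</code></pre></div></div>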
<p>Finally run:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>nvidia-ctk runtime configure <span class="nt">--runtime</span><span class="o">=</span>docker
<span class="nv">$ </span>nvidia-ctk cdi generate <span class="nt">--output</span><span class="o">=</span>/etc/cdi/nvidia.yaml
</code></pre></div></div>

<p class="notice--warning"><strong>Attention:</strong> You may need to run the last line above before running any container (not 100% sure), so keep it somewhere handy.</p>

<h3 id="test-gpu-access-in-a-docker-container">Test GPU access in a docker container</h3>
<p>Run a simple <code class="language-plaintext highlighter-rouge">Ubuntu</code> container to test GPU access.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker run <span class="nt">-it</span> <span class="nt">--rm</span> <span class="nt">--gpus</span><span class="o">=</span>all ubuntu:22.04 nvidia-smi
Unable to find image <span class="s1">'ubuntu:22.04'</span> locally
22.04: Pulling from library/ubuntu
6414378b6477: Pull <span class="nb">complete 
</span>Digest: sha256:0e5e4a57c2499249aafc3b40fcd541e9a456aab7296681a3994d631587203f97
Status: Downloaded newer image <span class="k">for </span>ubuntu:22.04
Sun Oct 27 04:32:29 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05   Driver Version: 550.127.05   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|<span class="o">===============================</span>+<span class="o">======================</span>+<span class="o">======================</span>|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
|  0%   47C    P8    13W / 180W |      1MiB /  8117MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|<span class="o">=============================================================================</span>|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
</code></pre></div></div>

<h2 id="install-frigate-with-hardware-acceleration">Install Frigate with Hardware Acceleration</h2>
<p>In this section I will share my minimal config that showcases the main goal of running Frigate with GPU acceleration.</p>

<p class="notice--danger"><strong>Important:</strong> If you are new to Frigate, I urge you to follow the <a href="https://docs.frigate.video/guides/getting_started">official instructions</a> first. They are easy to follow, but also pretty substantial.</p>

<p class="notice--info"><strong>Attention:</strong> Pay special attention to the <a href="https://docs.frigate.video/configuration/hardware_acceleration#nvidia-gpus">Hardware Acceleration (for NVIDIA GPUs)</a> and <a href="https://docs.frigate.video/configuration/object_detectors/#nvidia-tensorrt-detector">Nvidia TensorRT Detector</a> instructions to understand how GPU should be configured.</p>

<h3 id="configure-and-run-frigate-service">Configure and Run Frigate service</h3>

<p>My <code class="language-plaintext highlighter-rouge">docker-compose.yaml</code> file:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">services</span><span class="pi">:</span>
  <span class="na">frigate</span><span class="pi">:</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s">frigate</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s">unless-stopped</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">ghcr.io/blakeblackshear/frigate:stable-tensorrt</span>
    <span class="na">shm_size</span><span class="pi">:</span> <span class="s2">"</span><span class="s">2048mb"</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">/etc/localtime:/etc/localtime:ro</span>
      <span class="pi">-</span> <span class="s">./config:/config</span>
      <span class="pi">-</span> <span class="s">./storage:/media/frigate</span>
      <span class="pi">-</span> <span class="na">type</span><span class="pi">:</span> <span class="s">tmpfs</span> <span class="c1"># Optional: 1GB of memory, reduces SSD/SD Card wear</span>
        <span class="na">target</span><span class="pi">:</span> <span class="s">/tmp/cache</span>
        <span class="na">tmpfs</span><span class="pi">:</span>
          <span class="na">size</span><span class="pi">:</span> <span class="m">1000000000</span>
    <span class="na">ports</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s2">"</span><span class="s">5000:5000"</span>
      <span class="pi">-</span> <span class="s2">"</span><span class="s">8554:8554"</span> <span class="c1"># RTSP feeds</span>
      <span class="pi">-</span> <span class="s2">"</span><span class="s">8555:8555/tcp"</span> <span class="c1"># WebRTC over tcp</span>
      <span class="pi">-</span> <span class="s2">"</span><span class="s">8555:8555/udp"</span> <span class="c1"># WebRTC over udp</span>

    <span class="na">environment</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">USE_FP16=False</span>
      <span class="pi">-</span> <span class="s">YOLO_MODELS=yolov7-320</span>
      <span class="pi">-</span> <span class="s">TRT_MODEL_PREP_DEVICE=0</span>
      <span class="pi">-</span> <span class="s">CUDA_MODULE_LOADING=LAZY</span>

    <span class="na">runtime</span><span class="pi">:</span> <span class="s">nvidia</span>    
    <span class="na">deploy</span><span class="pi">:</span>
      <span class="na">resources</span><span class="pi">:</span>
        <span class="na">reservations</span><span class="pi">:</span>
          <span class="na">devices</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="na">driver</span><span class="pi">:</span> <span class="s">nvidia</span>
              <span class="c1">#device_ids: ['0'] # this is only needed when using multiple GPUs</span>
              <span class="na">count</span><span class="pi">:</span> <span class="m">1</span> <span class="c1"># number of GPUs</span>
              <span class="na">capabilities</span><span class="pi">:</span> <span class="pi">[</span><span class="nv">gpu</span><span class="pi">]</span>

</code></pre></div></div>
<p>Pay attention to the <code class="language-plaintext highlighter-rouge">environment</code>, <code class="language-plaintext highlighter-rouge">runtime</code>, and <code class="language-plaintext highlighter-rouge">deploy</code> settings. Also note that I am using the <code class="language-plaintext highlighter-rouge">frigate:stable-tensorrt</code> variant of the Docker image, which ships with the libraries necessary to run TensorRT models.</p>

<p>Frigate config file <code class="language-plaintext highlighter-rouge">/config/config.yaml</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">go2rtc</span><span class="pi">:</span>
  <span class="na">streams</span><span class="pi">:</span>
    <span class="na">my-rtsp-camera</span><span class="pi">:</span> <span class="c1"># &lt;- for RTSP streams</span>
      <span class="pi">-</span> <span class="s">rtsp://username:password@my-rtsp-camera-address.local:554//h264Preview_01_main</span> <span class="c1"># &lt;- stream which supports video &amp; aac audio</span>
      <span class="pi">-</span> <span class="s">ffmpeg:rtsp_cam#audio=opus</span>   <span class="c1"># &lt;- copy of the stream which transcodes audio to the missing codec (usually will be opus)</span>
<span class="na">cameras</span><span class="pi">:</span>
  <span class="na">my-rtsp-camera</span><span class="pi">:</span>
    <span class="na">ffmpeg</span><span class="pi">:</span>
      <span class="na">hwaccel_args</span><span class="pi">:</span> <span class="s">preset-nvidia-h264</span>
      <span class="na">inputs</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">path</span><span class="pi">:</span> <span class="s">rtsp://127.0.0.1:8554/my-rtsp-camera</span>
          <span class="na">roles</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="s">detect</span>
            <span class="pi">-</span> <span class="s">record</span>
    <span class="na">detect</span><span class="pi">:</span>
      <span class="na">width</span><span class="pi">:</span> <span class="m">2560</span>
      <span class="na">height</span><span class="pi">:</span> <span class="m">1440</span>
<span class="na">record</span><span class="pi">:</span>
  <span class="na">enabled</span><span class="pi">:</span> <span class="no">true</span>
  <span class="na">retain</span><span class="pi">:</span>
    <span class="na">days</span><span class="pi">:</span> <span class="m">7</span>
    <span class="na">mode</span><span class="pi">:</span> <span class="s">motion</span>
  <span class="na">events</span><span class="pi">:</span>
    <span class="na">retain</span><span class="pi">:</span>
      <span class="na">default</span><span class="pi">:</span> <span class="m">14</span>
      <span class="na">mode</span><span class="pi">:</span> <span class="s">active_objects</span>
<span class="na">mqtt</span><span class="pi">:</span>
  <span class="na">host</span><span class="pi">:</span> <span class="s">mqtt.server.com</span>

<span class="na">detectors</span><span class="pi">:</span>
  <span class="na">tensorrt</span><span class="pi">:</span>
    <span class="na">type</span><span class="pi">:</span> <span class="s">tensorrt</span>
    <span class="na">device</span><span class="pi">:</span> <span class="m">0</span> <span class="c1">#This is the default, select the first GPU</span>

<span class="na">model</span><span class="pi">:</span>
  <span class="na">input_tensor</span><span class="pi">:</span> <span class="s">nchw</span>
  <span class="na">input_pixel_format</span><span class="pi">:</span> <span class="s">rgb</span>
  <span class="na">width</span><span class="pi">:</span> <span class="m">320</span>
  <span class="na">height</span><span class="pi">:</span> <span class="m">320</span>
</code></pre></div></div>

<p>You do not need to download a model; on first start, Frigate automatically configures and builds the model specified in the <code class="language-plaintext highlighter-rouge">YOLO_MODELS</code> environment variable.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker compose up
...
frigate  | Creating yolov7-320.cfg and yolov7-320.weights
frigate  | 
frigate  | Done.
frigate  | 2024-10-28 04:42:40.009374904  <span class="o">[</span>INFO] Starting go2rtc healthcheck service...
frigate  | 
frigate  | Generating yolov7-320.trt. This may take a few minutes.
...
</code></pre></div></div>
<p>It will finish generating the model and start the Frigate service in a few minutes.</p>

<h3 id="check-gpu-usage">Check GPU Usage</h3>

<p>We can see that the GPU is used for the object detector:</p>

<figure class=" ">
  
    
      <a href="/assets/posts/frigate-on-proxmox/frigate-system-status.png" title="Frigate System Status Dashboard">
          <img src="/assets/posts/frigate-on-proxmox/frigate-system-status.png" alt="Frigate System Status Dashboard" />
      </a>
    
  
  
</figure>

<p>We can also check which processes are running on the GPU:</p>

<p class="notice--warning"><strong>Attention</strong>: Due to docker limitations <code class="language-plaintext highlighter-rouge">nvidia-smi</code> does not show any process in the LXC or docker containers. Fortunately, it works if we run it on the proxmox host. See the <a href="https://github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-645579458">issue</a> for details.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># We are in the proxmox host console</span>
<span class="nv">$ </span>nvidia-smi
Mon Oct 28 00:12:04 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|<span class="o">=========================================</span>+<span class="o">========================</span>+<span class="o">======================</span>|
|   0  NVIDIA GeForce GTX 1070        Off |   00000000:01:00.0 Off |                  N/A |
| 26%   52C    P2             37W /  180W |     653MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|<span class="o">=========================================================================================</span>|
|    0   N/A  N/A     35963      C   frigate.detector.tensorrt                     320MiB |
|    0   N/A  N/A     36001      C   ffmpeg                                        330MiB |
+-----------------------------------------------------------------------------------------+
</code></pre></div></div>
<p>Notice that the GPU is handling both object detection and H.264 decoding with ffmpeg.</p>]]></content><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><summary type="html"><![CDATA[Running Frigate on Proxmox LXC with Nvidia GPU acceleration]]></summary></entry><entry><title type="html">DIY Li-Ion pack for Taranis Qx7</title><link href="https://burak.kakilli.com/diy-battery-pack/" rel="alternate" type="text/html" title="DIY Li-Ion pack for Taranis Qx7" /><published>2022-06-15T00:00:00-04:00</published><updated>2022-06-15T00:00:00-04:00</updated><id>https://burak.kakilli.com/diy-battery-pack</id><content type="html" xml:base="https://burak.kakilli.com/diy-battery-pack/"><![CDATA[<p>While building my first RC plane, I learned that the takeoff weight of an airplane is crucial. The rule is: the heavier you are, the shorter you fly. “One does not simply <img src="/assets/posts/diy-battery-pack/One-Does-Not-Simply.jpg" alt="One Does Not Simply" style="height:1.1em; vertical-align:middle;" />” pile on batteries and expect a longer flight time. This fact made me curious about batteries, and that’s where I discovered Li-Ion cells.</p>

<p>I spent some time on YouTube and was surprised at how easily people upgrade the batteries of their electronic devices with Li-Ion cells, which are very easy to come by (they are very cheap, or can be recycled from old laptop batteries, old powerbank batteries, etc.). Then I decided to build my own pack for my airplane. But before building that, I wanted to practice on something small, so I decided to make my transmitter rechargeable! If you own a Taranis Qx7, it needs six AA batteries to operate. Luckily, the AA housing is modular and you can replace it with another compatible battery (they sell official NiMH packs for the Qx7). All I needed was two Li-Ion cells, JST connectors, and some soldering.</p>

<p><img src="/assets/posts/diy-battery-pack/laptop-battery.jpg" alt="Recycling Laptop Battery" /></p>

<p>I salvaged many 18650 Li-Ion cells from three old laptops and a powerbank that had been waiting to be thrown out. Even though some of the cells were dead for good, I still got plenty of healthy ones. After I received my JST connectors and tools, I began soldering. Although soldering directly on 18650 terminals is advised against, since the heat can degrade the cell, I still went for it as a learning experience, taking care not to heat the cells for too long.</p>

<div style="display: flex; gap: 10px;">
  <img src="/assets/posts/diy-battery-pack/pack-1.jpg" alt="DIY Battery Pack Front" style="width: 49%;" />
  <img src="/assets/posts/diy-battery-pack/pack-2.jpg" alt="DIY Battery Pack Back" style="width: 49%;" />
</div>

<p>After soldering the battery terminals, I built a harness that lets me charge the battery with my LiPo charger. Basically, the charger requires a main battery cable and a balancing port. Since the charge and discharge currents are very low, I used the same 22 AWG cable for everything and simply shorted the main battery connectors to the balance terminals. Finally, I wrapped the pack in Kapton tape for safety and protection.</p>

<p align="center">
  <img src="/assets/posts/diy-battery-pack/2S-3S-harness.jpg" alt="DIY Battery Harness" style="max-width: 70%;" />
</p>

<p>I cannot say it is among my best DIY projects, but I am satisfied with the result. In fact, it turned out to be good enough to become a permanent replacement for the six AA batteries in my controller. Looking forward to upgrading more tools with Li-Ion!</p>

<p align="center">
  <img src="/assets/posts/diy-battery-pack/final-product-liion-pack.jpg" alt="Final Product" style="max-width: 50%;" />
</p>]]></content><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><summary type="html"><![CDATA[Building a battery pack for Taranis Qx7 transmitter by recycling old Li-Ion cells.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://burak.kakilli.com/assets/posts/diy-battery-pack/thumbnail.jpg" /><media:content medium="image" url="https://burak.kakilli.com/assets/posts/diy-battery-pack/thumbnail.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Long Endurance Reconnaissance UAV</title><link href="https://burak.kakilli.com/uav-project/" rel="alternate" type="text/html" title="Long Endurance Reconnaissance UAV" /><published>2020-09-13T23:00:00-04:00</published><updated>2020-09-13T23:00:00-04:00</updated><id>https://burak.kakilli.com/uav-project</id><content type="html" xml:base="https://burak.kakilli.com/uav-project/"><![CDATA[<p><img src="/assets/posts/uav-project/banner.jpg" alt="Flight Field!" /></p>

<p>The story is coming soon :)</p>

<!-- Courtesy of embedresponsively.com -->

<div class="responsive-video-container">
    <iframe src="https://www.youtube-nocookie.com/embed/IJQZDnNWxpA" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
  </div>

<h1 id="building-stages">Building Stages</h1>

<figure class="third ">
  
    
      <a href="/assets/posts/uav-project/gallery/1.jpg" title="Cutting fuselage for wings">
          <img src="/assets/posts/uav-project/gallery/th/1.jpg" alt="Gallery Image 1" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/2.jpg" title="Landing gear mount">
          <img src="/assets/posts/uav-project/gallery/th/2.jpg" alt="Gallery Image 2" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/3.jpg" title="Completed fuselage with frame and landing gear">
          <img src="/assets/posts/uav-project/gallery/th/3.jpg" alt="Gallery Image 3" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/4.jpg" title="Inside of fuselage (after wing mount)">
          <img src="/assets/posts/uav-project/gallery/th/4.jpg" alt="Gallery Image 4" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/5.jpg" title="Wing mount with bolts and nuts">
          <img src="/assets/posts/uav-project/gallery/th/5.jpg" alt="Gallery Image 5" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/6.jpg" title="Attached (poplar dowel) twin booms">
          <img src="/assets/posts/uav-project/gallery/th/6.jpg" alt="Gallery Image 6" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/7.jpg" title="Cut and sealed the A-tail wing">
          <img src="/assets/posts/uav-project/gallery/th/7.jpg" alt="Gallery Image 7" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/8.jpg" title="Tail wing front view">
          <img src="/assets/posts/uav-project/gallery/th/8.jpg" alt="Gallery Image 8" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/9.jpg" title="Servo housing and sealing servo connection">
          <img src="/assets/posts/uav-project/gallery/th/9.jpg" alt="Gallery Image 9" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/10.jpg" title="Completed control surfaces (flaps and ailerons)">
          <img src="/assets/posts/uav-project/gallery/th/10.jpg" alt="Gallery Image 10" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/11.jpg" title="Attached tail">
          <img src="/assets/posts/uav-project/gallery/th/11.jpg" alt="Gallery Image 11" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/12.jpg" title="Tail attachment with wing nuts">
          <img src="/assets/posts/uav-project/gallery/th/12.jpg" alt="Gallery Image 12" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/130.jpg" title="Almost there">
          <img src="/assets/posts/uav-project/gallery/th/13.jpg" alt="Gallery Image 13" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/14.jpg" title="Foam nose to reduce drag">
          <img src="/assets/posts/uav-project/gallery/th/14.jpg" alt="Gallery Image 14" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/15.jpg" title="4x1300mAh battery pack with mount">
          <img src="/assets/posts/uav-project/gallery/th/15.jpg" alt="Gallery Image 15" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/16.jpg" title="Battery attachment. Multiple attachable positions for CG adjustment">
          <img src="/assets/posts/uav-project/gallery/th/16.jpg" alt="Gallery Image 16" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/17.jpg" title="Battery housing inside fuselage">
          <img src="/assets/posts/uav-project/gallery/th/17.jpg" alt="Gallery Image 17" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/18.jpg" title="Manual testing setup before flying">
          <img src="/assets/posts/uav-project/gallery/th/18.jpg" alt="Gallery Image 18" />
      </a>
    
  
    
      <a href="/assets/posts/uav-project/gallery/19.jpg" title="Final checks on the field">
          <img src="/assets/posts/uav-project/gallery/th/19.jpg" alt="Gallery Image 19" />
      </a>
    
  
  
</figure>]]></content><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><summary type="html"><![CDATA[Designing and building an RC Airplane from scratch: My experience]]></summary></entry><entry><title type="html">My first flight!</title><link href="https://burak.kakilli.com/first-flight/" rel="alternate" type="text/html" title="My first flight!" /><published>2020-09-12T23:00:00-04:00</published><updated>2020-09-12T23:00:00-04:00</updated><id>https://burak.kakilli.com/first-flight</id><content type="html" xml:base="https://burak.kakilli.com/first-flight/"><![CDATA[<p>Today was an extraordinary day in my life. One of my childhood dreams came true: I piloted an airplane! Well, I was not completely in charge of the entire operation, clearly. It takes months, if not years, to become a licensed pilot. Nevertheless, nothing stops you from being a co-pilot of an aircraft under the supervision of an instructor!</p>

<p>A local flight school was offering a 45-minute “Discovery flight” for $100! It is worth every penny. In fact, I learned that this is a common offering at flight schools to draw interest in their private pilot programs. After two of my friends, who are also crazy about aviation, tried it, I followed suit.</p>

<p>The aircraft was a very old Cessna 150, which is probably used mostly for this purpose. My friend and I went there with our families, but only he (who had done this before) was allowed on board as a passenger, because of the size of the aircraft. The captain, Adam, was a great guy who works as an instructor at the flight school. Although I had done my homework and studied all the pre-flight checks, he took care of them. Taxiing was anything but driving a car: you control the throttle with your hand and steer with your feet! Nevertheless, it did not take me long to adapt to the concept. When we finally arrived at the runway, he took control of the steering and I took the yoke for takeoff. We accelerated to the target speed, then I pulled the yoke. The moment of truth. Initially, I did not notice anything, since I was focused on the gauges indicating the rate of climb, ground speed, and so on, which were crucial for flying. Then at some point I turned to my left and looked out the window. I felt enormous freedom. As we climbed, the view of the ground opened up. It was an astonishing scene. We followed the river crossing the city, cruising at 1000ft along it for about 15 minutes, and then turned back. I will be honest: I felt really sick for a while on the way back. It might have been the altitude or the crosswind. We followed the river again all the way back to the airport, then Adam took control for the final approach and landing (which is not a joke). It was one of the smoothest landings I’ve ever seen.</p>

<p>When we arrived at the hangar and got off the aircraft, I felt that my muscles were stiff as a rock from the excitement. I had not noticed how hard I was holding the yoke. I am sincerely thankful to my friends who made me aware of this opportunity, and to the flight school, for this truly exceptional experience. Becoming a pilot is no longer an impossible dream for me.</p>]]></content><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><summary type="html"><![CDATA[I have flown an airplane today! A must-do-in-a-lifetime experience!]]></summary></entry><entry><title type="html">Analog media controller (DIY)</title><link href="https://burak.kakilli.com/analog-media-controller/" rel="alternate" type="text/html" title="Analog media controller (DIY)" /><published>2020-09-01T00:00:00-04:00</published><updated>2020-09-01T00:00:00-04:00</updated><id>https://burak.kakilli.com/analog-media-controller</id><content type="html" xml:base="https://burak.kakilli.com/analog-media-controller/"><![CDATA[<p>I like analog stuff and really don’t want it to disappear. Controlling devices with knobs and sliders is the most intuitive and psychologically satisfying to me. One day I noticed that the keyboard of my office desktop has no media control buttons, and I was using my mouse for pretty much everything related. I was not gonna let that happen anymore :) I went online and ordered a slider potentiometer. Then I built this cool gadget using some long-forgotten hardware in the lab:</p>

<p><img src="/assets/posts/media-controller/teaser.jpeg" alt="Analog Media Controller" /></p>

<p>The buttons are for mute (gray), next track (yellow), previous track (blue), and play/pause (green). The awesome slider is obviously for volume control. Pretty neat and useful unit, always within my reach!</p>

<p>Source code and (some) instructions: <a href="https://github.com/bkakilli/ardu_media_control">github.com/bkakilli/ardu_media_control</a></p>]]></content><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><summary type="html"><![CDATA[An Arduino-based media controller unit for old-school types like myself.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://burak.kakilli.com/assets/posts/hello_world/steven-houston-d2lO9btumD4-unsplash.jpg" /><media:content medium="image" url="https://burak.kakilli.com/assets/posts/hello_world/steven-houston-d2lO9btumD4-unsplash.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Isolated Deep Learning Environment (with GUI) using Docker and VSCode</title><link href="https://burak.kakilli.com/docker-with-gpu/" rel="alternate" type="text/html" title="Isolated Deep Learning Environment (with GUI) using Docker and VSCode" /><published>2020-08-31T01:00:00-04:00</published><updated>2020-08-31T01:00:00-04:00</updated><id>https://burak.kakilli.com/gpu-with-docker</id><content type="html" xml:base="https://burak.kakilli.com/docker-with-gpu/"><![CDATA[<p>Isolation, portability, easy deployment. These sound perfect for production environments. But is it too much to ask for R&amp;D environments too? I don’t think so. Yes, I admit there may be some overhead from time to time, especially for ordinary tasks. But I can assure you that when you fire up your entire development environment with a single “<code class="language-plaintext highlighter-rouge">docker-compose up -d</code>” line on a completely new machine for the first time, you will immediately see that it is a blessing. If you need to run your experiments on many different machines, this kind of portability helps a lot. In my case, I need to work on 8-12 different dedicated virtual machines in a GPU cluster environment. Dealing with dependencies and virtual environments every time was a disaster before switching to Docker. Now I don’t even need to log in to my VMs, since I built a Swarm cluster (which I will not cover here): I can just submit jobs from my local computer, and they are guaranteed to run with the same settings thanks to Docker.</p>

<p><img src="/assets/posts/docker-with-gpu/docker-training.png" alt="Training a model on a GPU in a container" />
<em>Training a model on a GPU in a container</em></p>


<p>How about GUI? Containers are often used to serve in the background, and their most common usage does not require a direct human interface. In our case, however, we will be working inside the container itself, and we will most likely want a GUI for data or results visualization. Therefore, the container needs the proper instructions at run time to turn on the GUI. This is already handled in the <code class="language-plaintext highlighter-rouge">docker-compose.yaml</code> file below; the only host-side step you may need is shown right below.</p>
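<p class="notice--info"><strong>Note:</strong> Depending on your host’s X server setup, windows launched from the container may still be rejected by the X server. A typical fix on a local X11 session (this is an assumption about your setup, not part of the compose file) is to allow local clients once per login session:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Allow local (non-network) clients, such as containers, to use the X server
xhost +local:
</code></pre></div></div>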

<p><img src="/assets/posts/docker-with-gpu/docker-gui.png" alt="Using GUI for data visualization in a container" />
<em>Using GUI for data visualization in a container</em></p>

<p>This example specifically covers creating a PyTorch environment for deep learning. Your application might be different, but adapting the <code class="language-plaintext highlighter-rouge">Dockerfile</code> and the other files should be trivial.</p>

<h2 id="first-time-setup">First time setup</h2>
<ol>
  <li>Install Docker (v19.03+): <a href="https://docs.docker.com/engine/install/ubuntu/" target="_blank">https://docs.docker.com/engine/install/ubuntu/</a></li>
  <li>Install docker-compose:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip <span class="nb">install </span>docker-compose
</code></pre></div>    </div>
  </li>
  <li>Install <code class="language-plaintext highlighter-rouge">nvidia-container-runtime</code>:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt update<span class="p">;</span> <span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> nvidia-container-runtime
</code></pre></div>    </div>
  </li>
  <li>Edit <code class="language-plaintext highlighter-rouge">/etc/docker/daemon.json</code>
    <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
    </span><span class="nl">"runtimes"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
        </span><span class="nl">"nvidia"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
            </span><span class="nl">"path"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/usr/bin/nvidia-container-runtime"</span><span class="p">,</span><span class="w">  </span><span class="nl">"runtimeArgs"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
        </span><span class="p">}</span><span class="w">
    </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div>    </div>
  </li>
  <li>Reload and restart the Docker daemon:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>systemctl daemon-reload
<span class="nb">sudo </span>systemctl restart docker
</code></pre></div>    </div>
  </li>
</ol>
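<p>Before moving on, it may be worth a quick sanity check that the Nvidia runtime is wired up correctly. A minimal test, assuming the CUDA image tag below is available to you (any CUDA base image should do), is to run <code class="language-plaintext highlighter-rouge">nvidia-smi</code> in a throwaway container:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Should print the same GPU table as running nvidia-smi on the host
docker run --rm --runtime=nvidia nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04 nvidia-smi
</code></pre></div></div>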

<h2 id="fire-up-the-environment">Fire up the environment</h2>

<p>Create a new folder and put the following <code class="language-plaintext highlighter-rouge">Dockerfile</code> and <code class="language-plaintext highlighter-rouge">docker-compose.yaml</code> in it. Note that the compose file mounts your local folder (<code class="language-plaintext highlighter-rouge">/home/burak/local_workspace</code>) as <code class="language-plaintext highlighter-rouge">/workspace</code> inside the container, so everything in that folder on the host machine will be visible under <code class="language-plaintext highlighter-rouge">/workspace</code> in the container.</p>

<p><strong>Note:</strong> In order to avoid file ownership/permission issues, we will create the same user inside the container. You can get the user ID (UID) and group ID (GID) of your account on the host machine with the following commands:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">id</span> <span class="nt">-u</span>   <span class="c"># For UID. Usually 1000, 1001 or above</span>
<span class="nb">id</span> <span class="nt">-g</span>   <span class="c"># For GID. Usually 1000, 1001 or above</span>
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">Dockerfile</code></p>
<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04</span>

<span class="k">ENV</span><span class="s"> USER=#SET_USER_NAME</span>
<span class="k">ENV</span><span class="s"> UID=#SET_UID</span>
<span class="k">ENV</span><span class="s"> GUI=#SET_GID</span>

<span class="c"># Update your container and install required stuff</span>
<span class="k">RUN </span>apt update <span class="o">&amp;&amp;</span> <span class="se">\
</span>    apt upgrade <span class="nt">-y</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span>    apt autoremove <span class="nt">-y</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span>    apt <span class="nb">install</span> <span class="nt">-y</span> <span class="se">\
</span>        python3-pip <span class="se">\
</span>        python3-dev <span class="se">\
</span>        git <span class="se">\
</span>        libgl1-mesa-glx

<span class="c"># Nothing fancy. Just set python3 pip3 as the system default</span>
<span class="k">RUN </span>update-alternatives <span class="nt">--install</span> /usr/bin/python python /usr/bin/python3.6 1 <span class="o">&amp;&amp;</span> <span class="se">\
</span>    update-alternatives <span class="nt">--install</span> /usr/bin/pip pip /usr/bin/pip3 2 <span class="o">&amp;&amp;</span> <span class="se">\
</span>    update-alternatives <span class="nt">--auto</span> python <span class="o">&amp;&amp;</span> <span class="se">\
</span>    update-alternatives <span class="nt">--auto</span> pip

<span class="c"># Install Pytorch and other essential python libraries</span>
<span class="k">RUN </span>pip <span class="nb">install</span> <span class="nt">--upgrade</span> <span class="nt">--no-cache-dir</span> <span class="se">\
</span>        pip <span class="se">\
</span>        pylint <span class="se">\
</span>        h5py <span class="se">\
</span>        tqdm <span class="se">\
</span>        numpy <span class="se">\
</span>        scipy <span class="se">\
</span>        scikit-learn <span class="se">\
</span>        scikit-image <span class="se">\
</span>        open3d <span class="se">\
</span>        <span class="nv">torch</span><span class="o">==</span>1.5.0 <span class="se">\
</span>        <span class="nv">torchvision</span><span class="o">==</span>0.6 <span class="se">\
</span>        tensorboard

<span class="c"># Add local user</span>
<span class="k">ENV</span><span class="s"> HOME=/home/$USER</span>
<span class="k">RUN </span>useradd <span class="nt">-s</span> /bin/bash <span class="nt">-u</span> <span class="nv">$UID</span> <span class="nt">-g</span> <span class="nv">$GID</span> <span class="nt">-m</span> <span class="nv">$HOME</span> <span class="nv">$USER</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span>    usermod <span class="nt">-aG</span> <span class="nb">sudo</span> <span class="nv">$USER</span>
<span class="k">USER</span><span class="s"> $USER</span>

<span class="k">WORKDIR</span><span class="s"> /workspace</span>
<span class="k">CMD</span><span class="s"> bash</span>
</code></pre></div></div>
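<p>The compose file below builds this image automatically, but if you want to test the <code class="language-plaintext highlighter-rouge">Dockerfile</code> in isolation first, you can build it by hand once the <code class="language-plaintext highlighter-rouge">#SET_...</code> placeholders are filled in. The tag matches the <code class="language-plaintext highlighter-rouge">image:</code> name used in the compose file:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Run inside the folder containing the Dockerfile
docker build -t my_images/pytorch .
</code></pre></div></div>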

<p><code class="language-plaintext highlighter-rouge">docker-compose.yaml</code></p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">services</span><span class="pi">:</span>
    <span class="na">my_envirorment</span><span class="pi">:</span>
        <span class="na">image</span><span class="pi">:</span> <span class="s">my_images/pytorch</span>
        <span class="na">build</span><span class="pi">:</span> <span class="s">.</span>
        <span class="na">command</span><span class="pi">:</span> <span class="s">/bin/bash</span>
        <span class="na">restart</span><span class="pi">:</span> <span class="s">unless-stopped</span>                 <span class="c1"># Restart the container unless manually stopped</span>
        <span class="na">volumes</span><span class="pi">:</span>
          <span class="pi">-</span> <span class="s">/home/burak/local_workspace:/workspace</span>
          <span class="pi">-</span> <span class="s">/tmp/.X11-unix:/tmp/.X11-unix:rw</span>    <span class="c1"># GUI related</span>
        <span class="na">environment</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="s">PYTHONUNBUFFERED=1</span>                <span class="c1"># Required to tell the Python to flush to std output</span>
            <span class="pi">-</span> <span class="s">DISPLAY</span>                           <span class="c1"># GUI related</span>
            <span class="pi">-</span> <span class="s">QT_X11_NO_MITSHM=1</span>                <span class="c1"># GUI related</span>
            <span class="pi">-</span> <span class="s">NVIDIA_VISIBLE_DEVICES=all</span>        <span class="c1"># GUI related</span>
            <span class="pi">-</span> <span class="s">NVIDIA_DRIVER_CAPABILITIES=all</span>    <span class="c1"># GUI related</span>

        <span class="na">privileged</span><span class="pi">:</span> <span class="no">true</span>                        <span class="c1"># GUI related</span>
        <span class="na">shm_size</span><span class="pi">:</span> <span class="s">32G</span>                           <span class="c1"># Required for training</span>
        <span class="na">runtime</span><span class="pi">:</span> <span class="s">nvidia</span>                         <span class="c1"># nvidia-container-runtime needed</span>

        <span class="na">hostname</span><span class="pi">:</span> <span class="s">my_envirorment</span>
        
        <span class="na">stdin_open</span><span class="pi">:</span> <span class="no">true</span>                        <span class="c1"># same as 'docker run -i'</span>
        <span class="na">tty</span><span class="pi">:</span> <span class="no">true</span>                               <span class="c1"># same as 'docker run -t'</span>

    <span class="c1"># Optional tensorboard service</span>
    <span class="c1"># (you need to obtain tensorflow image: "docker pull tensorflow")</span>
    <span class="na">tensorboard</span><span class="pi">:</span>
        <span class="na">image</span><span class="pi">:</span> <span class="s">tensorflow/tensorflow:latest</span>
        <span class="na">command</span><span class="pi">:</span> <span class="s">tensorboard --logdir=/tb_logs --port=8008 --bind_all</span>
        <span class="na">volumes</span><span class="pi">:</span>
          <span class="pi">-</span> <span class="s">/home/burak/local_workspace/logs:/tb_logs</span>
        <span class="na">restart</span><span class="pi">:</span> <span class="s">unless-stopped</span>

        <span class="na">ports</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="s2">"</span><span class="s">8008:8008"</span>

        <span class="na">hostname</span><span class="pi">:</span> <span class="s">tensorboard</span>
</code></pre></div></div>

<p>Fire up!</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-compose up <span class="nt">-d</span>
</code></pre></div></div>
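<p>Once the containers are up, you can drop into a shell in your environment and check that PyTorch actually sees the GPU (the service name is the one defined in the compose file above):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-compose exec my_environment bash
# Then, inside the container:
python -c "import torch; print(torch.cuda.is_available())"   # Expect: True
</code></pre></div></div>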

<h2 id="connecting-to-your-environment-using-vscode">Connecting to your environment using VSCode</h2>
<p>VSCode is an amazing, game-changing IDE/text editor. I will not go into much detail here on VSCode and its installation. Basically, you need VSCode installed together with the following two extensions (a command-line alternative follows the list):</p>
<ol>
  <li>Docker</li>
  <li>Visual Studio Code Remote - Containers</li>
</ol>
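<p>If you prefer the command line, the same two extensions can be installed through VSCode’s <code class="language-plaintext highlighter-rouge">code</code> CLI (the extension IDs below are the Marketplace IDs as of this writing; double-check them if the install fails):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>code --install-extension ms-azuretools.vscode-docker
code --install-extension ms-vscode-remote.remote-containers
</code></pre></div></div>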

<p>After installing these, click on the “Remote Explorer” tab in the Activity Bar and choose Containers from the dropdown at the top. You will see your running container there; click on it and open your folder to start working.</p>

<p>You are ready to work in your isolated environment!</p>

<h2 id="extra-stuff">Extra stuff</h2>

<h3 id="deploy-on-another-machine">Deploy on another machine</h3>
<p>If you need to deploy your image to other machines, there are at least two ways. The first is saving the image to a file, copying the file to the target machine, and then loading the image on the target. The second, more standard way is using a local registry and letting other machines pull your images just as if they were pulling from Docker Hub.</p>

<h3 id="save-copy-load">Save, Copy, Load</h3>

<p>On the machine where you built your image:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker save <span class="nt">--output</span> my_images_pytorch.tar my_images/pytorch
</code></pre></div></div>
<p>This will generate a <code class="language-plaintext highlighter-rouge">my_images_pytorch.tar</code> file, which you should transfer to the other machine using scp, sftp, etc.</p>

<p>Then on the target machine:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker load <span class="nt">--input</span> my_images_pytorch.tar
</code></pre></div></div>
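<p>As a shortcut, you can also stream the image over SSH without creating an intermediate file (the user and hostname here are placeholders for your target machine):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Pipe the image archive straight into "docker load" on the target
docker save my_images/pytorch | ssh user@target-machine docker load
</code></pre></div></div>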

<h3 id="use-registry">Use Registry</h3>

<p>A better way to deploy is to use a local registry. Simply create a registry instance on your local machine. Remember that it must keep running for other machines to be able to pull images.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="nt">-d</span> <span class="nt">-p</span> 5000:5000 <span class="nt">--restart</span><span class="o">=</span>always <span class="nt">--name</span> registry registry:2
</code></pre></div></div>
<p>This starts a registry instance on port <code class="language-plaintext highlighter-rouge">5000</code>. Even if you reboot your PC, it will come back online thanks to the <code class="language-plaintext highlighter-rouge">--restart=always</code> parameter.</p>

<p>Once your registry is online, simply push your image into it. In order to do that, the image must first be tagged with the registry’s address (here, the host running the registry is <code class="language-plaintext highlighter-rouge">192.168.1.5</code>):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker tag my_images/pytorch 192.168.1.5:5000/my_images/pytorch
docker push 192.168.1.5:5000/my_images/pytorch
</code></pre></div></div>
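<p>To verify the push succeeded, you can query the registry’s standard catalog endpoint (same host IP as above):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Lists the repositories stored in the registry
curl http://192.168.1.5:5000/v2/_catalog
</code></pre></div></div>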
<p>Now your image is up in the registry. Then, on the target machine, you only need to do:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker pull 192.168.1.5:5000/my_images/pytorch
</code></pre></div></div>
<p>It will pull your image from your own registry on your local machine.</p>
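<p>Optionally, you can re-tag the pulled image back to its short name, so that the same compose file works unchanged on the target machine:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker tag 192.168.1.5:5000/my_images/pytorch my_images/pytorch
</code></pre></div></div>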

<p><strong>IMPORTANT</strong>
Because this registry is not secured, we need to modify one thing on the remote machine, once, the first time. We basically need to make the remote machine trust our local registry by editing the <code class="language-plaintext highlighter-rouge">/etc/docker/daemon.json</code> file (on the remote/target machine), adding the following to the configuration (assuming the host IP is <code class="language-plaintext highlighter-rouge">192.168.1.5</code>), and then restarting the Docker daemon with <code class="language-plaintext highlighter-rouge">sudo systemctl restart docker</code>:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"insecure-registries"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"192.168.1.5:5000"</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>]]></content><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><summary type="html"><![CDATA[Create an isolated deep learning environment (GPU) with Docker and VSCode.]]></summary></entry><entry><title type="html">Hello, world!</title><link href="https://burak.kakilli.com/hello-world/" rel="alternate" type="text/html" title="Hello, world!" /><published>2020-08-20T00:00:00-04:00</published><updated>2020-08-20T00:00:00-04:00</updated><id>https://burak.kakilli.com/hello-world</id><content type="html" xml:base="https://burak.kakilli.com/hello-world/"><![CDATA[<p>The title is the ultimate phrase that thrills those who are truly passionate about the digital world. To me, it is that blinding brightness that welcomes you when you open the big door to the outside world; it feels like the beginning of endless possibilities. I know it is common practice to use it in coding tutorials, but nothing stops you from feeling the same way when you begin, well, virtually anything. That’s why I would like to salute my adventure with this phrase, since it perfectly represents my excitement right now.</p>

<p><img src="/assets/posts/hello_world/hello_world2.png" alt="Hello, World!" /></p>

<p>Today’s date is August 20, 2020, and it’s midnight. I finally started to spit out the first words of my long-overdue blog. Something tells me that today will not be the day I finish this page and publish it. We will see. I am not a romantic type. In fact, I could be the opposite of romantic, since my best man used to call me “Mr. Reason” when we were in college. And I am definitely not a man of letters. I hated writing during my school years (and my teachers hated my writing too, or perhaps they couldn’t read it. Anyways.), but since it is a part of my academic career now, I am kind of used to it. Besides, I feel that expressing myself in text is usually a simpler and more effective way to convey my thoughts to others (until we eventually invent an alternative, more efficient communication channel). On top of everything, my main motive with this blog is to have a log of my life for my records and loved ones.</p>

<p>Enough of pretty words. My awesome cool blog may contain posts about virtually anything that happens in my life, but I will likely write more about my work and digital/tech stuff. I am a strong believer in “the best way to learn is to teach”. I am not going to try to create perfect learning materials here; instead, there will be some key notes and reference points, especially for myself, to digest things and recall them when I need them later. If I produce quality, original work, I will also share it on public platforms, like Medium. So, the main audience of this blog is myself. But don’t hesitate to be a part of it!</p>]]></content><author><name>Burak Kakillioglu</name><email>bkakillioglu@gmail.com</email></author><summary type="html"><![CDATA[My awesome cool blog may contain posts about virtually anything that happens in my life. But more on my work, digital stuff, technology and gadgets.]]></summary></entry></feed>